dedicated hadronic b factory experiments as hera - b are designed as forward spectrometers , in adjustment to the huge lorentz boost under which the b decay particles are produced .the hera - b tracking concept relies on the propagation of track candidates which have been found in the pattern tracker in the field - free area upstream through the spectrometer magnet .the _ concurrent track evolution _strategy which employs the kalman filter technique is used to cope with the large track densities and still give a high track finding efficiency .practical implementation of this concept requires a fast and precise procedure to transport both the track parameters and its covariance matrix estimate , in spite of the inhomogeneity of the magnetic field .first solution based on a fifth - order runge - kutta method have been shown in .this note presents the mathematical basis and the program implementation of an approach which was developed to achieve a further gain in speed , and at the same time warrant sufficient accuracy for an optimal operation of the track finding process .this was achieved by providing a set of methods which are optimized for different transport ranges , and by testing the validity of each approximation directly within the track finding application .the kalman filter technique is used very often in high energy physics experiments .we include sec .[ applic ] in the revised version of the note to discuss the optimized implementation of the kalman filter for magnet tracking , which reduces strongly the amount of computation .it was not described in detail in .in the following we will use a coordinate system in which the axis points along the proton beam , the axis is directed normal to it in the horizontal plane , pointing towards the inside of the hera ring , and the axis is oriented upwards such that , and form a right - handed system .this system is identical to the _ arte _coordinate system defined in the appendix of .the following choice of track parameters is suited for fixed target experiments with relatively small transverse momenta .a particle with momentum , charge and coordinates is described by the state vector at a reference coordinate : where and are the transverse coordinates , and are the track slopes , and .the parameter is the particle charge in units of the elementary charge .we use the following units : are in centimeters , is in gev / c and the magnetic field in kgauss .the trajectory of a particle in a static magnetic field , neglecting stochastic perturbations as energy loss and multiple scattering , must satisfy the equations of motion : where parameter is proportional to the velocity of light and is therefore defined as and the functions , are \;\ ; , \\a_{y}=(1+t_{x}^{2}+t_{y}^2)^{\frac{1}{2}}\cdot \left[-t_{x}\cdot(t_{y}b_{y}+b_{z } ) + ( 1+t_{y}^{2})b_{x } \right ] \;\ ; .\\ \end{array}\ ] ] the initial values at are let us assume we should propagate the particle parameters from plane to plane . 
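the system ( [ dxdz ] ) can be made concrete with a short sketch of its right - hand side for the state vector ( x , y , tx , ty , q / p ) in the units used here ( cm , gev / c , kgauss ) . the field routine , the numerical value of the constant proportional to the velocity of light and the explicit form of the function a_x ( reconstructed from the symmetric structure of the a_y quoted above ) are assumptions of this illustration and not taken from the original note .
....
#include <cmath>

// state vector (1): x, y, tx = dx/dz, ty = dy/dz, q = charge/momentum
struct State { double x, y, tx, ty, q; };

// placeholder for the field routines gufld / utfeld mentioned later in the note
void bField(double x, double y, double z, double B[3]);

// right-hand side of the equations of motion d(state)/dz
void rhs(double z, const State& p, State& dpdz) {
    double B[3];
    bField(p.x, p.y, z, B);                      // Bx, By, Bz in kgauss
    const double nrm = std::sqrt(1.0 + p.tx * p.tx + p.ty * p.ty);
    // a_x written as the conventional counterpart of the a_y quoted above;
    // this symmetric form is our reconstruction, not copied from the note
    const double ax = nrm * ( p.ty * (p.tx * B[0] + B[2]) - (1.0 + p.tx * p.tx) * B[1] );
    const double ay = nrm * (-p.tx * (p.ty * B[1] + B[2]) + (1.0 + p.ty * p.ty) * B[0] );
    const double kappa = 0.000299792458;         // ~ speed of light in these units (assumed value)
    dpdz.x  = p.tx;
    dpdz.y  = p.ty;
    dpdz.tx = p.q * kappa * ax;
    dpdz.ty = p.q * kappa * ay;
    dpdz.q  = 0.0;                               // q/p constant (no energy loss)
}
....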
for solution of the equations ( [ dxdz ] ) three different methods are used .the choice depends on the distance between these planes ._ _ : a parabolic expansion of the particle trajectory is used \2 ._ _ : the classical fourth - order runge - kutta method is selected to find solution of the equations ( [ dxdz ] ) ._ _ : a fifth - order runge - kutta method with adaptive step size control is used .use of the kalman filter technique for pattern recognition requires that particle parameters and their covariance matrix can be transported to the location of the next measurement .evaluation of the derivatives of the state vector components with respect to their initial values ( [ x0 ] ) is needed to transport the covariance matrix .this is achieved by integrating the equations for derivatives together with the ` zero trajectory ' ( [ dxdz ] ) .let us assume the magnetic field is smooth enough and field gradients can be neglected . in this case , the equations ( [ dxdz ] ) are invariant with respect to small shifts by and .this means that derivatives with respect to initial , are trivial : to obtain equations for , let us differentiate equations ( [ dxdz ] ) with respect to and change the order of the derivative operators and on the left hand sides : , \\d / dz(\partial t_{y } / \partial t_{x0 } ) = & q_{0 } \cdot \upsilon \cdot \left[(\partial a_{y}/\partial t_{x } ) ( \partial t_{x}/\partial t_{x0 } ) + ( \partial a_{y}/\partial t_{y } ) ( \partial t_{y}/\partial t_{x0 } ) \right ] , \\ \partial q / \partial t_{x0 } \= & 0 , \end{array}\ ] ] where initial values for the solution of equations ( [ dtx ] ) are : the equations for are similar to equations ( [ dtx ] ) , but the initial values are : to obtain equations for , let us differentiate the equations ( [ dxdz ] ) with respect to and change the order of the derivative operators and in the left parts : \ ; , \\ d / dz(\partial t_{y } / \partial q_{0 } ) = & \upsilon \cdot a_{y } + \upsilon \cdot q_{0 } \cdot \left[(\partial a_{y}/\partial t_{x } ) ( \partial t_{x}/\partial q_{0 } ) + ( \partial a_{y}/\partial t_{y } ) ( \partial t_{y}/\partial q_{0 } ) \right ] \ ; , \\ \partial q / \partial q_{0 } = & 1 \ ; . \end{array}\ ] ] initial values for the solution of equations ( [ dq ] ) are : in the case of the following relations between , and their derivatives are valid : the procedure of evaluating the derivatives is simplified when these relations are taken into account .* approximation a * : the vector can be approximated as : where the derivatives are solutions of the system of equations \ ; , \\ \partial q / \partial t_{x0 } \= & 0 \ ; \end{array } \;\;\;\;\;\ ; % ( 11)\ ] ] with initial values ( [ dtx0 ] ) .the solution for the derivatives in the same approximation is similar : the derivatives are solutions of the equations ( [ dq ] ) with the initial values given in ( [ dq0 ] ) . 
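the short - distance case can be illustrated with a minimal sketch of the parabolic expansion over a step h , reusing the state and rhs helpers of the previous sketch ; this is an illustration only , not the ranger implementation .
....
// parabolic expansion of the trajectory over a short step h = z_out - z_in:
// the slopes change linearly, the positions quadratically; q/p is unchanged.
void parabolicStep(double z_in, State& p, double z_out) {
    const double h = z_out - z_in;
    State d;
    rhs(z_in, p, d);                 // curvature terms q*kappa*ax, q*kappa*ay at z_in
    p.x  += p.tx * h + 0.5 * d.tx * h * h;
    p.y  += p.ty * h + 0.5 * d.ty * h * h;
    p.tx += d.tx * h;
    p.ty += d.ty * h;
}
....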
* approximation b * : this is the most drastic simplification - the derivatives as well as are neglected in the corresponding equations .the system of equations ( [ dtx ] ) is simplified to : the solution of this system with initial values ( [ dtx0 ] ) is the solution of similar equations for derivatives is for the vector of derivatives we obtain : where the derivatives are solutions of the system of equations with initial values from ( [ dq0 ] ) .we use the notations from to describe the practical implementation of the kalman filter technique for track following in the magnet .the system state vector ( [ x ] ) after inclusion of measurements is denoted by , and its covariance matrix by .the coordinate measurement corresponding to the hit is denoted by .the hera - b magnet tracking detectors ( drift tubes and micro - strip gaseous chambers ) measured only one coordinate and is a scalar and its covariance matrix contains only one element .the relation between the track parameters and the expected measurement is described by the projection matrix .the matrix has the structure : for a detector plane measuring only coordinates ( signal wires of the drift tubes or anode strips of the msgc are parallel to the axis ) , and for stereo planes rotated around the axis .the predicted state vector is determined as the solution of the equations ( [ dxdz ] ) with the initial value .the covariance matrix of the vector is obtained by the propagation of the matrix : where denotes the covariance matrix of the process noise and the transport matrix is the estimated residual and variance become the updating of the system state vector after inclusion of the measurement is obtained with the filter equations : with the filtered residuals the contribution of the filtered point is given by : ( 15.5,8 ) ( -.80,0.0)file = fig_1_1.eps , width=8.95 cm ( 6.7,4.9)(0,0)[t]a ) ( 7.7,0.0)file = fig_1_3.eps , width=8.95 cm ( 15.0,4.9)(0,0)[t]b ) because of sparse projection matrices ( [ h1],[h2 ] ) , the calculation in ( [ rk][rrk ] ) becomes rather simple . for the matrix the variance of estimated residuals is i.e. it involves 6 multiplications .here we use the notation for the same case , each of the five elements of the gain matrix is calculated as the calculation of the gain matrix includes 15 multiplications and 1 division . the most time consuming operations are the propagation of the covariance matrix in ( [ ck1 ] ) and the calculation of the covariance matrix in the filter equations ( [ kxc ] ) a typical behavior of the magnetic field components as a function of is shown in fig . [ field ] .the main bending component ( ) has a bell - shaped .the components are sizeable away from the central axis ( fig .[ field]b ) .there is clearly no region inside the magnet where the field can be regarded as homogeneous .`` numerical experiments '' with the real track finding procedures have shown that the approximation b for derivatives is accurate enough to be used for the propagation of the covariance matrix ( [ ck1 ] ) in the case of the inhomogeneous magnetic field of hera - b . in approximationb the transport matrix has a rather sparse structure : where , and derivatives are denoted and for a short distance ( _ _ ) the derivatives are obtained in a parabolic expansion as : for a long distance we find the derivatives as the solution of the system ( [ deriveq ] ) ( initial values from ( [ dq0 ] ) ) , together with the `` zero trajectory '' ( [ dxdz ] ) , by the fourth or fifth - order runge - kutta method . 
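the reduced operation count of the filter step discussed above can be illustrated with a sketch for a projection matrix of the form ( h_x , h_y , 0 , 0 , 0 ) ; the array layout and names are ours and the code is an illustration , not the ranger implementation .
....
// scalar kalman filter step for a one-coordinate measurement m with variance
// sigma2 and a sparse projection matrix h = (hx, hy, 0, 0, 0);
// state vector P = (x, y, tx, ty, q/p), symmetric 5x5 covariance C.
void filterStep(double P[5], double C[5][5],
                double m, double sigma2, double hx, double hy) {
    // residual of the predicted state with respect to the measurement
    const double r = m - (hx * P[0] + hy * P[1]);
    // residual variance: only the (x, y) block of C contributes
    const double V = hx * (hx * C[0][0] + hy * C[0][1])
                   + hy * (hx * C[1][0] + hy * C[1][1]) + sigma2;
    // gain vector: five elements, one division
    double K[5];
    for (int i = 0; i < 5; ++i)
        K[i] = (C[i][0] * hx + C[i][1] * hy) / V;
    // state update
    for (int i = 0; i < 5; ++i) P[i] += K[i] * r;
    // covariance update C := (I - K h) C, exploiting the sparse h
    double HC[5];
    for (int j = 0; j < 5; ++j) HC[j] = hx * C[0][j] + hy * C[1][j];
    for (int i = 0; i < 5; ++i)
        for (int j = 0; j < 5; ++j)
            C[i][j] -= K[i] * HC[j];
}
....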
the propagation of the covariance matrix in ( [ ck1 ] ) we perform in two steps .first , we define the product of two matrices : where and . then the matrix is multiplied by to obtain the final symmetric matrix : the evaluation of the elements in ( [ u ] ) and ( [ uf ] ) implies 37 multiplications , that is much smaller than the 200 multiplications needed for the case of the completely filled matrix .the product of the matrices and in ( [ kxc ] ) is and for the matrix for the case with the matrix the covariance matrix is given by : only 40 multiplications are sufficient to obtain the matrix in this case and 20 multiplications for the option with the matrix .this has to be compared with 100 multiplications needed for the completely filled matrix .the optimized matrix evaluation was implemented in functions of the _ ranger _ package related to the magnet tracking .the described approach for optimized integration of the equations [ dxdz ] and their derivatives was implemented as a set of c++ functions included in the _ ranger _ package and used for the magnet tracking and the track fit .all functions are of type void . in the following , we list the definition of common parameters of these functions in c++ : .... //// input parameters //double z_in ; // z value for input parameters double p_in[5 ] ; // vector of input track parameter ( x , y , tx , ty , q / p ) double c_in[25 ] ; // covariance matrix of input parameters float error[2 ] ; // desired accuracy in cm //error[0 ] for inner tracker region //error[1 ] for outer tracker region // //output parameters // double z_out ; // z value for output parameters double p_out[5 ] ; // vector of output track parameters double rkd[25 ] ; // derivatives of output parameters with respect // to input //rkd[0 ] deriv . of p_in[0 ] with respect to p_out[0 ] //rkd[1 ] p_in[0 ] p_out[1 ] // . .. //rkd[5 ] p_in[1 ] p_out[0 ] // . .. //rkd[24 ] p_in[4 ] p_out[4 ] double c_out[25];// covariance matrix of output parameters int ierror ; // error flag ( = 0 ok , = 1 particle curls ) .... the definition of the corresponding variables in fortran is : .... c c input parameters c real*8 z_in ! z value for input parameters real*8 p_in(5 ) !vector of input parameters ( x , y , tx , ty , q / p ) real*8 c_in(5,5 ) ! covariance matrix of input parameters real error(2 ) ! desired accuracy in cm c error(1 ) for inner tracker region c error(2 ) for outer tracker region c c output parameters c real*8 z_out ! z value for output parameters real*8 p_out(5 ) !vector of output track parameters real*8 rkd(5,5 ) !rkd(i , j ) derivative of p_in(i ) with respect c to p_out(j ) real*8 c_out(5,5 ) ! covariance matrix of output parameters integer ierror! error flag ( = 0 ok , = 1 particle curls ) .... a typical function call in c++ .... rk4order_(double & z_in , double * p_in , double & z_out , double * p_out , double * rkd , int & ierror ) ; .... invokes function integrating equations for particle parameters and equations for derivatives in the approximation a by a fourth - order runge - kutta method . in fortranthis function can be invoked like subroutine : .... call rk4order(z_in , p_in , z_out , p_out , rkd , ierror ) .... the differences are evident and in following , the call statements in fortran will be mentioned only . a statement .... call rk4fast(z_in , p_in , z_out , p_out , rkd , ierror ) .... 
effects integration of the equations for the ` zero trajectory ' and calculation of derivatives in the approximation b by a fourth - order runge - kutta method .the function called as : .... call rk1fast(z_in , p_in , z_out , p_out , rkd , ierror ) .... calculates parameters and derivatives in approximation b using a parabolic expansion of the particle trajectory .the function evaluating particle parameters ( approximation a for derivatives ) by a fifth - order runge - kutta method with adaptive step size control is invoked by the statement : .... call rk5order(z_in , p_in , error , z_out , p_out , rkd , ierror ) .... the corresponding function which evaluates the derivatives in approximation b is executed by : .... call rk5fast(z_in , p_in , error , z_out , p_out , rkd , ierror ) .... the function invoked as : .... call rk5numde(z_in , p_in , error , z_out , p_out , rkd , ierror ) .... evaluates the output parameters by a fifth - order runge - kutta method with adaptive step size control and calculates derivatives by ` numerical differentiation ' of the output parameters as a function of the input parameters .the field gradients are not neglected in this case but the function spends by factor of 5 more computing time than rk5fast . the `` fully automatic '' function rktrans transports particle parameters from plane to plane by means of three different methods depending on as it was described earlier .the derivatives are calculated in approximation b. the call statement for this function is : .... call rktrans(z_in , p_in , z_out , p_out , rkd , ierror ) .... the function rktransc uses a similar approach to transport particle parameters and the corresponding covariance matrix . the structure of the derivative matrix in approximation b is fully exploited for maximum speed .the function can be invoked by a statement .... call rktransc(z_in , p_in , c_in , z_out , p_out , c_out , ierror ) ....all described functions were designed for the conditions of the complete geometry of the hera - b detector where typical transport distances are .the limitations of tracking precision coming from multiple scattering and measurement errors in trackers constrain the tracing accuracy required for such distances .the function rk5clip can be used to propagate particle parameters over larger distances with higher accuracy ( approximation a for derivatives ) .the call statement for this function is ....call rk5clip(z_in , p_in , z_out , p_out , rkd , ierror ) .... 
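a hedged usage example of the c++ interface quoted earlier is given below for rk4order_ ; the other functions follow the same calling pattern , the numerical values are placeholders , and the extern declaration only illustrates the assumed linkage .
....
// assumed linkage of the fortran-callable symbol with the interface quoted above
extern "C" void rk4order_(double& z_in, double* p_in, double& z_out,
                          double* p_out, double* rkd, int& ierror);

void propagateExample() {
    double z_in  = 0.0, z_out = 150.0;                    // cm
    double p_in[5]  = { 1.2, -0.4, 0.01, 0.002, 0.25 };   // x, y, tx, ty, q/p (placeholders)
    double p_out[5], rkd[25];
    int    ierror = 0;
    rk4order_(z_in, p_in, z_out, p_out, rkd, ierror);
    if (ierror != 0) {
        // particle curls in the field: discard or re-treat the track candidate
    }
}
....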
the following procedure was used for the tuning of the steering parameters .particle parameters were generated with production slopes uniformly distributed from to .each particle was traced by small steps from the target to the area behind the magnet and then traced back using rk5clip .the difference between initial and final particle coordinates was regarded as a measure of computational accuracy .the accuracy for selected steering parameters is shown in the table .as expected , the function rk5clip spends more computing time especially for low momentum ( the dependence is roughly ) ..the computational accuracy of rk5clip for different momenta [ cols="^,^,^ " , ] it should be noted that all described functions do not check if the magnetic field is defined in the region where the particle should be traced .the user himself must make sure that a corresponding function is invoked within the magnetic field and should use linear line propagation in the field - free case .also , an attempt to trace a very slow particle will not be successful because the equations ( 1 ) do not describe a particle curling in the magnetic field . an additional function , called as : .... real zmin , zmax ! in cm . . .call rkzfield(zmin , zmax ) .... returns as output the lower and upper bounds zmin , zmax ( with respect to the center of the magnet ) of the region where the field is defined by routines gufld , utfeld .this region ( roughly from the center of the magnet ) includes the area where we have field measurements or at least results from the mafia calculation . at the edges of the region the magnetic fieldcan be neglected so that simple linear line searches and fits are sufficient for pattern recognition and track fitting .this can save computing time for event generation and reconstruction .therefore during the event simulation in hbgean the procedure for particle tracing in the magnetic field is invoked only when the particle is within the geant volume magn .the definition of this volume can be found in the _ arte _ table gesl for the detector component magnet .note that this volume is not identical with the region of non - zero field as defined by gufld , utfeld .i am greatly indebted to rainer mankel , the author of the _ ranger _ package , for sharing with me his experience in the field of track finding and knowledge about _ranger_. i would like to thank him for the careful reading of the manuscript and the useful advices .i am indebted to g.bohm and h.kolanoski for the discussion of the revised version .i am very much grateful to desy zeuthen for the kind hospitality during my visit .
|
in this note we present a flexible approach to the propagation of track parameters and their derivatives in an inhomogeneous magnetic field , keeping the computational effort small . we also discuss a kalman filter implementation that uses this optimized computation of the derivatives .
|
the currents in each edge of an electrical circuit , which is composed of linear elements ( i.e. , resistance , capacitance , and inductance ) and where conservation of charge at each node is granted , are generally found by solving kirchhoff s equations . in particular , for resistor networks , the solution for the currents at each edge is related to random walks in graphs , first - passage times , finding shortest - paths and community structures on weighted networks , and network topology spectral characteristics . though the relationship between currents and voltage differences in network circuits with linear elements follows ohm s law , their modelling capability is enormous .for example , it is used to model fractures in materials , biologically inspired transport networks , airplane traffic networks , robot path planing , queueing systems , etc . in practice ,resistor networks are used in various electronic designs , such as current or voltage dividers , current amplifiers , digital to analogue converters , etc .these devices are usually inexpensive , relatively easy to manufacture , and require little precision on the constituents . in order to solve the voltages across these networks ,two methods are broadly used : nodal analysis and mesh analysis . in the former ,nodes are labelled arbitrarily and voltages are set by using the kirchhoff s current equations of the system . in thelater , loops are defined with an assigned current which do not contain any inner loop , then the kirchhoff s voltage equations are solved .these constitute classic techniques of circuit theory .however , nodal and mesh methods ( or even transfer function methods ) become inefficient to recalculate the voltage drops across the network if the location of inputs and outputs changes constantly , e.g. , if the cathode and/or anode of a voltage generator are moved from one node of the network to another .this switching situation is common in the modelling of the modern power - grid as an impedance network circuit or in general supply - demand networks .an example of this case is shown in fig .[ fig_1 ] for a resistor network with a single source - sink nodes .another redistribution of currents , which is also poorly accounted by these methods , happens if a single source node and single sink node are decentralized for multiple source and/or multiple sink nodes that preserve the initial input and output magnitudes . in any case , either of the classical circuit theory methods requires to be applied for each configuration of the sources and sinks in order to find the currents at every edge of the network .in this work , we present novel general analytical solutions for current conservative dc / ac circuit networks with resistive , capacitive , and/or inductive edge characteristics .the novelty comes from expressing the currents and voltage drops in terms of the eigenvalues and eigenvectors of the admittance ( namely , the inverse of the edge impedance ) laplacian matrix of the circuit network . 
in order to derive our novel solutionswe assume that the impedance values at every edge and the location of the source / sink nodes are known .our solutions give the exact dc / ac currents that each edge of the circuit holds and are identical in magnitude to the ones found from nodal circuit theory analysis .the practicality of our solutions comes from , allowing to compute the equivalent impedance between any two nodes of the network directly and allowing to easily calculate the redistribution of currents that happens when the location of sources and sinks is changed within the network ( such as in the example of fig .[ fig_1 ] ) .the scientific interest of our solutions comes from , establishing a clear relationship between the currents and voltages in dc / ac circuits with the topology invariants of the network , namely , its eigenvalues and eigenvectors . *( a ) * at a source node and a single output current of at a sink node .panel * ( b ) * shows the same resistor network but with multiple inputs ( nodes ) and outputs ( nodes ) which add to the same inflow / outflow magnitudes than in panel * ( a)*. changing the system from panel * ( a ) * to panel * ( b ) * , or vice - versa , generates a global redistribution of currents.,title="fig : " ] + * ( b ) * at a source node and a single output current of at a sink node .panel * ( b ) * shows the same resistor network but with multiple inputs ( nodes ) and outputs ( nodes ) which add to the same inflow / outflow magnitudes than in panel * ( a)*. changing the system from panel * ( a ) * to panel * ( b ) * , or vice - versa , generates a global redistribution of currents.,title="fig : " ] the approach we develop provides new analytical insight into the transmission flow problem and exhibits different features than other available solutions .moreover , it provides a new tool to achieve the voltage / current solutions and to analyse resonant behaviour in linear circuits . as a practical application, we relate these solutions to closed circuits where a voltage generator is present ( instead of having open sources / sinks that feed current to the network ) and solve a simple network where we can compare our solutions to the ones provided by solving directly kirchhoff s nodal equations .the model we solve corresponds to a conservative circuit network with known input / output net currents and obeys ohm s and kirchhoff s law for conservation of charge .we assume that the input ( output ) net current ( ) at the source ( sink ) node ( ) , its frequency ( with for dc currents and for ac currents ) , and their phases are known .the extension to various input or output nodes is done in appendix [ anex_3 ] .ohm s law linearly relates the current at an edge of the circuit with the voltage difference between the nodes that the edge connects .specifically , where is the current passing from node to node given a current source located at node and a sink node located at node , is the voltage difference , and is the impedance of the symmetric edge . depends on the edge s resistive , capacitive , and/or inductive properties , and the network topological properties of the circuit . the variables in eq .( [ eq_ohm ] ) are complex numbers in the case of ac input / output currents and real numbers for dc currents . 
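the admittance laplacian used below can be assembled directly from the list of edge impedances ; the following sketch ( with our own data layout ) covers the ac case with complex entries , the dc case being the special case of real impedances .
....
#include <complex>
#include <vector>

using cplx = std::complex<double>;

struct Edge { int k, l; cplx z; };   // impedance of the symmetric edge (k, l)

// admittance laplacian of an n-node circuit from its list of edges
std::vector<std::vector<cplx>> admittanceLaplacian(int n, const std::vector<Edge>& edges) {
    std::vector<std::vector<cplx>> G(n, std::vector<cplx>(n, cplx(0.0, 0.0)));
    for (const Edge& e : edges) {
        const cplx y = 1.0 / e.z;    // admittance of the edge
        G[e.k][e.k] += y;            // diagonal: sum of admittances at the node
        G[e.l][e.l] += y;
        G[e.k][e.l] -= y;            // off-diagonal: minus the edge admittance
        G[e.l][e.k] -= y;
    }
    return G;                        // rows sum to zero by construction
}
....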
in general , a resonance in the -edge appears for a minimum of the impedance , namely , when the input / output frequency is tuned to a frequency related to the natural frequency of the edge line .for example , in the case that the edge is modelled by a series circuit , the impedance of the edge is = z_{lk}\ , , \label{eq_z_rlc}\ ] ] where is the natural frequency of the edge , is the dissipation of the edge , is the edge s inductance , is the edge s capacitance , is the edge s resistance , and . in this case, a resonance in the -edge appears if .consequently , our solutions are valid as long as the input / output frequency is different from any of the resonant frequencies associated to the ] is the corresponding -th eigenvector coordinate ( with ) . with the exception of ( assuming the phase difference between the net input and output flows is null , which guarantees global charge conservation ) , the remaining quantities are complex numbers ,hence , they have an amplitude and a phase , and the indicates complex conjugation .thus , \,\frac { e^{j\,\phi_{kl}^{(st ) } } } { \lambda_n(\mathbf{g } ) } \ , , \label{eq_volt_diff_comp}\ ] ] where [ is the real [ imaginary ] part of the product - \left[\vec{v}_n\right]_l \right ) \left ( \left [ \vec{v}_n\right]_s^\star - \left[\vec{v}_n\right]_t^\star \right) ] in every edge , where is the characteristic frequency of each edge and is the input frequency ( , for every time ) .then , the admittance laplacian matrix entries from eq .( [ eq_model ] ) are given by the inverse of the impedance ( admittance ) is given by where if node is connected to node , otherwise , and .the resultant admittance laplacian matrix in this case is with ( ) being the real ( imaginary ) part of the entries in eq .( [ eq_prob_lap_complex ] ) and the laplacian matrix from eq .( [ eq_prob_lap ] ) .consequently , the eigenvalues of are simply the eigenvalues of divided by the impedance and these matrices share the same eigenvectors . in this case ( the square circuit with identical impedances for its edges ) , the ac current flowing between nodes and is - \left[\vec{v}_2\right]_a \right ) \frac{\left| z \right| } { 2\,e^{-j\,\varphi } } \left ( \left[\vec{v}_2\right]_s - \left[\vec{v}_2\right]_t \right)\,,\ ] ] which is the same result as in the equal resistances dc case [ eq . ( [ eq_current_sol ] ) ] for the modulus because .furthermore , the analogy is further seen when calculating the equivalent impedance between the source ( ) and sink ( ) nodes using eq .( [ eq_equiv_impedance ] ) .this results in - \left[\vec{v}_2\right]_t \right|^2 } { 2/z } = z\ , .\label{eq_equiv_sol}\ ] ] the solution is identical to the one that circuit theory derives and is in direct correlation with the dc problem as expected . in more general scenarios ,the relationship between the dc and ac circuit is not direct . in such situations ,the complex entries of the laplacian matrix for the ac case are not related to the dc laplacian matrix .hence , further assumptions need to be done to find analytical solutions . 
for instance, one could have to impose that the input frequency to be larger than the natural frequencies of the lines ( for every edge ) , such that the imaginary part of the laplacian be positive semi - defined ( see appendix [ anex_1 ] ) .the approach we develop provides new analytical insight into the transmission flow problem and exhibits different features than other available solutions .moreover , it provides a new tool to achieve the voltage / current solutions and to analyse resonant behaviour in linear circuits . as a practical application, we relate these solutions to closed circuits where a voltage generator is present ( instead of having open sources / sinks that feed current to the network ) and solve a simple network where we can compare our solutions to the ones provided by solving directly kirchhoff s equations .our findings help to solve problems , where the input and output nodes change in time within the network , more effectively than classical circuit theory techniques .the weighted laplacian matrix of the circuit network with edge properties given by the symmetric line impedances has the following complex value entries hence , which is the first requirement for a laplacian matrix : the zero row sum property .the eigenspace of is composed of a set of complex eigenvalues and eigenvectors with , such that thus , where is a unitary matrix ( , being the identity matrix ) of eigenvectors and is a diagonal matrix of eigenvalues ( ) . due to( [ eq_prop1 ] ) , has a null eigenvalue ( referred to as in the following ) associated to the kernel vector , where . using eq .( [ eq_prop1 ] ) , .hence , the kernel of the matrix ( the space of eigenvectors associated to the null eigenvalues ) is at least of dimension and direct inversion of the matrix is not possible .this is the second property of a laplacian matrix , which implies that the rank of the matrix is less than .the third property is that laplacian matrices are positive semi - defined . in particular , for any column vector , the dirichlet sum is such that where `` '' is the inner product operation and is the weighted adjacency matrix of the circuit network .this inequality holds only if for all and . as a consequence, it implies that all eigenvalues are non - negative , because can be any of the eigenvectors . in that case , where the last equality is possible because of the unitary property of the eigenvectors ( [ \vec{v}_m]_k^\star = \delta_{nm} ] for every spanning eigenvector ( ) .finally , {kp } = \sum_{l = 1}^n \left ( \sum_{n = 1}^{n-1 } \left[\vec{v}_n\right]_k \frac{1}{\lambda_n } \left[\vec{v}_n\right]_l^\star \sum_{m = 1}^{n-1 } \left[\vec{v}_m\right]_l \lambda_m \left[\vec{v}_m\right]_p^\star \right)\ ] ] \frac{1}{\lambda_n } \left ( \sum_{l = 1}^n \left[\vec{v}_n\right]_l^\star \left[\vec{v}_m\right]_l \right ) \lambda_m \left[\vec{v}_m\right]_p^\star\,,\ ] ] where , using eq .( [ eq_indep ] ) , it results in {kp } = \sum_{n = 1}^{n-1 } \left[\vec{v}_n\right]_k \left[\vec{v}_n\right]_p^\star\ ,. 
\label{eq_prod}\ ] ] now , observing that eq .( [ eq_gen ] ) can be written as \left[\vec{v}_n\right]_p^{\star } = \delta_{kp}\,,\ ] ] then , eq .( [ eq_prod ] ) is further simplified {kp } = \delta_{kp } - \frac{1}{n } = \left[\mathbf{i } \right]_{kp } - \frac{\left[\mathbf{j}\right]_{kp}}{n}\ , .\label{eq_prod_xg}\ ] ] consequently , we have shown that returning to eq .( [ eq_new_model ] ) , and using eq .( [ eq_inverse ] ) , we obtain the voltage potentials at each node where we use that and .if global conservation of charge is granted , namely , if the input current equals the output current in phase and magnitude , then , .otherwise , is a vector with all the elements equal to the magnitude and/or phase difference between the input and output net currents [ see eq .( [ eq_netflow ] ) ] .we note that the role of in eq .( [ eq_volt_sol ] ) is to add an arbitrary constant to the node voltage potential .this is easily interpreted as the arbitrary energy reference point .such arbitrary value is eliminated once voltage differences are calculated .moreover , voltage differences eliminate also the possible constant value given by .consequently , the voltage difference between two arbitrary nodes and in the network is given by - \left [ \mathbf{x}\,\vec{f}^{(st ) } \right]_l\ , .\label{eq_volt_diff_sol}\ ] ] thus , {ks } - \left[\mathbf{x } \right]_{ls } \,\right ) - f^{out}\left(\ , \left[\mathbf{x}\right]_{kt } - \left[\mathbf{x } \right]_{lt } \,\right)\ ] ] {ks } - \left[\mathbf{x } \right]_{ls } \,\right ) - \left(\ , \left[\mathbf{x}\right]_{kt } - \left[\mathbf{x } \right]_{lt } \,\right ) \right]\ ] ] - \left[\vec{v}_n\right]_l \right)\frac{1}{\lambda_n } \left ( \left[\vec{v}_n\right]_s^\star - \left[\vec{v}_n\right]_t^\star \right ) \right].\ ] ]in order to analyse how eq .( [ eq_volt_diff_sol ] ) changes when many sources and sinks are present , we need to rewrite eq .( [ eq_netflow ] ) to include the new sources of inflow and sinks of outflow .thus , in general , the net current at a node is where ( ) is the set of nodes that act as sources ( sinks ) and ( ) is the fraction of the total inflow ( outflow ) that goes through node ( ) , namely , ( ) .consequently , global conservation of charge is granted .substituting eq .( [ eq_gen_flows ] ) into eq .( [ eq_volt_diff_sol ] ) , the voltage difference between nodes and in the circuit network with multiple sources and sinks is - \left[\vec{v}_n\right]_l\right)\times\frac{1}{\lambda_n}\times \right . \\ \left.\;\;\;\;\;\;\times\left ( \sum_{s\in\mathcal{v}_s } a_s\,\left[\vec{v}_n \right]_s^\star - \sum_{t\in\mathcal{v}_t } b_t\,\left[\vec{v}_n\right]_t^\star\right ) \right ] .\label{eq_new_volt_diff_sol}\end{aligned}\ ] ] * ( a ) * ( nodes ) and output ( nodes ) currents .panel * ( b ) * shows the same resistor network but as a closed system containing a voltage generator and new resistors . these supply the input ( output ) currents at nodes ( ) via the new resistors ( ) with an identical magnitude as in panel * ( a)*.,title="fig : " ] + * ( b ) * ( nodes ) and output ( nodes ) currents .panel * ( b ) * shows the same resistor network but as a closed system containing a voltage generator and new resistors .these supply the input ( output ) currents at nodes ( ) via the new resistors ( ) with an identical magnitude as in panel * ( a)*.,title="fig : " ] when multiple sources and sinks exist [ e.g. 
, panel * ( a ) * in fig .[ fig_1_anex ] ] , then the transformation of the problem to a closed circuit problem requires the inclusion of a single super source node and super sink node need to be created [ panel * ( b ) * ] .all original source ( sink ) nodes are then connected to the new super source ( sink ) node that provides the total input ( output ) that the multiple sources ( sinks ) were feeding ( consuming ) in the original system , namely , ( ) .consequently , the multiple source - sink configuration in is transformed into a single - pair configuration of a new network that has nodes more than the original network . in such conditions, the former process enables to analyse the new network setting by means of a single generator that connects these two new nodes . in other words ,once a super source ( sink ) that connects to all the original sources ( sinks ) is defined , then a laplace problem can be defined by setting a voltage generator which provides where the equivalent resistance between the super source and super sink is unknown .this is because the impedance ( resistance ) values for the new edge connections between the multiple sources ( sinks ) to the super source [ which have to be set such that the current entering the network circuit through the old multiple sources ( sinks ) is identical to the one the particular source ( sink ) supplies ( consumes ) , e.g. , as in panel * ( b ) * of fig .[ fig_1_anex ] ] are unknown . in order to determine the impedances ( resistances ) of the edges connecting the super source ( sink ) to the multiple source ( sink ) nodes , we observe that : where neither the voltages nor the resistances are known .nevertheless , the voltages of the super nodes fulfil eq .( [ eq_multiple_s - t_transf ] ) , thus , arbitrarily setting the unknown resistances for the new edges to unity , can be derived and the node voltages for each of the multiple sources and sinks calculated .that is , authors thank the scottish university physics alliance ( supa ) .10 p. r. clayton , _ fundamentals of electric circuit analysis _ ( wiley , 2001 ) .f. r. k. chung , _ spectral graph theory _( american mathematical soc . and cbms * 92 * , 1997 ) .d. randall , `` rapidly mixing markov chains with applications in computer science and physics '' , _ computing in science & engineering _ , vol .2 , pp 30 - 41 , 2006 .m. e. j. newman and m. girvan , `` finding and evaluating community structure in networks '' , _ phys .69 , no . 026113 , 2004 . n. rubido , c. grebogi , and m. s. baptista , `` structure and function in flow networks '' , _ europhys ._ , vol . 101 , no .68001 , 2013 . n. rubido , c. grebogi , and m. s. baptista , `` resiliently evolving supply - demand networks '' , _ phys .e _ , vol .89 , no . 012801 , 2014 . g. g. batrouni , and a. hansen , `` fracture in three - dimensional fuse networks '' , _ phys .80 , no . 325 , 1998 .e. katifori , g. j. szollosi , and m. o. magnasco , `` damage and fluctuations induce loops in optimal transport networks '' , _ phys .104 , no . 048704 , 2010 .a. cardillo , m. zanin , j. gmez - gardees , m. romance , a. j. garca del amo , and s. boccaletti , `` modeling the multi - layer nature of the european air transport network : resilience and passengers re - scheduling under random failures '' , _ eur .j. special topics _ , vol .215 , pp 23 - 33 , 2013 .z. liu , s. pang , s. gong , and p. yang , `` robot path planning in impedance networks '' , _ proc . 
of 6th world congress on intelligent control and automation _ , vol .2 , pp 9109 - 9113 , 2006 .m. haenggi , `` analogy between data networks and electronic networks '' , _ electronic letters _ ,12 , pp 553 - 554 , 2002 .a. hajimiri , `` generalized time- and transfer - constant circuit analysis '' , _ ieee trans .circuits syst .i : regular papers _ , vol . 57 , no . 6 , pp 1105 - 1121 , 2010 .r. jakushokas and e. g. friedman , `` power network optimization based on link breaking methodology '' , _ ieee trans . on very large scale int .( vlsi ) syst .5 , pp 983 - 987 , 2013. j. cserti , `` application of the lattice green s function for calculating the resistance of infinite networks of resistors '' , _ am .68 , no . 10 , pp 896 - 906 , 2000 .f. y. wu , `` theory of resistor networks : the two - point resistance '' , _ j. phys . a : math .37 , pp 6653 - 6673 , 2004 .j. zheng jiang and m. c. smith , `` series - parallel six - element synthesis of biquadratic impedances '' , _ ieee trans .circuits syst .i : regular papers _ , vol. 59 , no .11 , pp 2543 - 2554 . 2012 .f. g. s. silva , r. n. de lima , r. c. s. freire , and c. plett , `` a switchless multiband impedance matching technique based on multiresonant circuits '' , _ ieee trans .circuits syst .ii : express briefs _ , vol .60 , no . 7 , pp 417 - 421 , 2013 .
|
in this work , we present novel general analytical solutions for the currents that develop in the edges of network - like circuits when some nodes of the network act as sources / sinks of dc or ac current . we assume that ohm's law is valid at every edge and that charge is conserved at every node ( with the exception of the source / sink nodes ) . the resistive , capacitive , and/or inductive properties of the lines in the circuit define a complex network structure with given impedances for each edge . our solution for the currents at each edge is derived in terms of the eigenvalues and eigenvectors of the laplacian matrix of the network defined from the impedances . this derivation also allows us to compute the equivalent impedance between any two nodes of the circuit and to relate it to the currents in a closed circuit which has a single voltage generator instead of many input / output source / sink nodes . in contrast to solving kirchhoff's equations , our derivation makes it easy to calculate the redistribution of currents that occurs when the location of sources and sinks changes within the network . finally , we show that our solutions are identical to the ones found from circuit theory nodal analysis .
|
simulation of dynamic fracture is a challenging problem because of the extremes of strain and strain - rate experienced by the material near a crack tip , and because of the inherent instabilities such as branching that characterize many applications .these considerations , as well as the incompatibility of partial differential equations ( pdes ) with discontinuities , have led to the formulation of specialized methods for the simulation of crack growth , especially in finite element analysis .these techniques include the extended finite element , , cohesive element , and phase field , , methods and have met with notable successes .the peridynamic theory of solid mechanics has been proposed as a generalization of the standard theory of solid mechanics that predicts the creation and growth of cracks . in this formulation crack dynamics is given directly by evolution equations for the deformation field eliminating the need for supplemental kinetic relations describing crack growth .the balance of linear momentum takes the form where is a neighborhood of , is the density , is the displacement field , is the body force density field , and is a material - dependent function that represents the force density ( per unit volume squared ) that point exerts on as a result of the deformation .the radius of the neighborhood is referred to as the _the motivation for peridynamics is that all material points are subject to the same basic field equations , whether on or off of a discontinuity ; the equations also have a basis in non - equilibrium statistical mechanics .this paradigm , to the extent that it is successful , liberates analysts from the need to develop and implement supplementary equations that dictate the evolution of discontinuities .standard practice in peridynamics dictates that the nucleation and propagation of cracks requires the specification of a damage variable within the functional form of that irreversibly degrades or eliminates the pairwise force interaction between and its neighbor .this is referred to as breaking the _ bond _ between and . herethe term `` bond '' is used only to indicate a force interaction between two material points and through some potential , whose value can depend on the deformations of other bonds as well . a wide variety of damage laws in peridynamics are possible , and often they contain parameters that can be calibrated to important experimental measurements such as critical energy release rate or the eshelby - rice -integral .damage evolution in peridynamic mechanics can be cast in a consistent thermodynamic framework , including appropriate restrictions derived from the second law of thermodynamics .this general approach of using bond damage has met with notable successes in the simulation of dynamic fracture . however , because of the large number of bonds in a discrete formulation of , there is a cost associated with keeping track of bond damage , as well as the need to specify a bond damage evolution law . 
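for orientation , a minimal mesh - free discretization of the balance law above , in the spirit of the meshfree method of silling and askari cited in the references , can be sketched as follows ; the node data layout and the pairwiseforce placeholder are assumptions of this illustration and not the code used later in this paper .
....
#include <array>
#include <cstddef>
#include <vector>

using Vec3 = std::array<double, 3>;

// material-dependent pairwise force density f; placeholder for the
// constitutive model (bond-based or state-based)
Vec3 pairwiseForce(const Vec3& x_i, const Vec3& x_j,
                   const Vec3& u_i, const Vec3& u_j);

// internal force density at every node: volume-weighted sum over the
// neighbors inside the horizon (the discrete analogue of the integral above)
void internalForce(const std::vector<Vec3>& x, const std::vector<Vec3>& u,
                   const std::vector<double>& vol,
                   const std::vector<std::vector<int>>& neighbors,
                   std::vector<Vec3>& force) {
    for (std::size_t i = 0; i < x.size(); ++i) {
        force[i] = {0.0, 0.0, 0.0};
        for (int j : neighbors[i]) {
            const Vec3 f = pairwiseForce(x[i], x[j], u[i], u[j]);
            for (int k = 0; k < 3; ++k)
                force[i][k] += f[k] * vol[j];
        }
    }
}
....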
in the present paper, we report on recent efforts to model cracks in peridynamics without a bond damage variable .the main innovation in the present paper is a nonconvex elastic material model for peridynamic mechanics that , under certain conditions , nucleates and evolves discontinuities spontaneously .this approach is rigorously shown to reproduce the most salient experimentally observed characteristic of brittle fracture the nearly constant amount of energy consumed by a crack per unit area of crack growth ( the griffith crack model ) .our results further show that in spite of the strong nonlinearity of the material model , the resulting equation of motion is well - posed within a suitable function space , providing a mathematical context for which multiple interacting cracks can grow without recourse to supplemental kinetic relations . in the limit of small horizon ,the nonconvex peridynamic model recovers a limiting fracture evolution characterized by the classical pde of linear elasticity away from the cracks . the evolving fracture system for the limit dynamics is shown to have bounded griffith fracture energy described by a critical energy release rate obtained directly from the nonconvex peridynamic potential .these results bring the field of peridynamic mechanics closer to the goal of generalizing the conventional theory to model both continuous and discontinuous deformation using the same balance laws .let denote the _ bond strain _ , defined to be the change in the length of a bond as a result of deformation divided by its initial length .we assume that the displacements are small ( infinitesimal ) relative to the size of the body . under this hypothesisthe strain between two points and under the displacement field is given by where is the unit vector in the direction of the bond and is the dot product between two vectors .to describe the material response , assume that the force interaction between points and reversibly stores potential ( elastic ) energy , and that this energy depends only on the bond strain and the bond s undeformed length .the elastic energy density at a material point is assumed to be given by where is the pairwise force potential per unit length between and and is the area ( in 2d ) or the volume ( in 3d ) of the neighborhood .the nonconvexity of the potential with respect to the strain distinguishes this material model from those previously considered in the peridynamic literature . 
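the linearized bond strain defined above can be written as a small helper ; the 3d vector type and names are ours .
....
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

// linearized bond strain between material points x and q under a small
// displacement field: s = (u(q) - u(x)) . e / |q - x|, with e the unit
// vector along the undeformed bond
double bondStrain(const Vec3& x, const Vec3& q, const Vec3& ux, const Vec3& uq) {
    const Vec3 d { q[0] - x[0], q[1] - x[1], q[2] - x[2] };
    const double len = std::sqrt(d[0] * d[0] + d[1] * d[1] + d[2] * d[2]);
    const double du_dot_d = (uq[0] - ux[0]) * d[0]
                          + (uq[1] - ux[1]) * d[1]
                          + (uq[2] - ux[2]) * d[2];
    return du_dot_d / (len * len);   // change in length per unit initial length
}
....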
by hamiltons principle applied to a bounded body , , , the equation of motion describing the displacement field is which is a special case of .the evolution described by is investigated in detail in the papers .we assume the general form where is a weight function and is a continuously differentiable function such that , , and .the pairwise force density is then given by for fixed and , there is a unique maximum in the curve of force versus strain ( figure 1 ) .the location of this maximum can depend on the distance between and and occurs at the bond strain such that .this value is , where is the unique number such that .[ fig - nonconvex ] ( 0,2 ) ( 0,-2 ) ; ( -3.5,0 ) ( 3.5,0 ) ; ( -3.5,-0.07 ) to [ out=-25,in=180 ] ( -1.5,-1.5 ) to [ out=0,in=180 ] ( 1.5,1.5 ) to [ out=0,in=165 ] ( 3.5,0.07 ) ; ( 1.5,-0.2 ) ( 1.5 , 0.2 ) ; ( -1.5,-0.2 ) ( -1.5 , 0.2 ) ; at ( 1.5,-0.2 ) ; at ( -1.5,-0.2 ) ; at ( 3.5,0 ) ; at ( 0,2.0 ) ; we introduce , the maximum value of bond strain relative to the critical strain among all bonds connected to : the fracture energy associated with a crack is stored in the bonds corresponding to points for which .it is associated with bonds so far out on the the curve in figure 1 that they sustain negligible force density .this set contains the _ jump set _ , along which the displacement has jump discontinuities . consider an initial value problem for the body with bounded initial displacement field , bounded initial velocity field , and a non - local dirichlet condition for within a layer of thickness external to containing the domain boundary .the initial displacement can contain a jump set associated with an initial network of cracks .this initial value problem for is well posed provided we frame the problem in the space of square integrable displacements satisfying the nonlocal dirichlet boundary conditions .this space is written .the body force is prescribed for and belongs to ;l^2_0(d;\mathbb{r}^d)) ] taking on the intial data , .normally , we expect an elastic spring to `` harden , '' that is , force increases with strain .if instead the spring `` softens '' and the force decreases , then it is unstable : under constant load , its extension will tend to grow without bound over time .a material model of the type shown in figure 1 has this type of softening behavior for sufficiently large strains . yet the instability of a bond between a _single _ pair of points and does not necessarily imply that the entire body is dynamically unstable . here , we present a condition on the material stability with regard to the growth of infinitesimal jumps in displacement across surfaces .let denote the volume fraction of points such that .. ] we apply a linear perturbation analysis of to show that small scale jump discontinuities in the displacement can become unstable and grow under certain conditions . 
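to make the force - strain curve of figure 1 concrete , the sketch below uses one admissible choice of profile function , f(r) = c ( 1 - exp(-beta r) ) , which is smooth , increasing and concave with f(0) = 0 ; this particular choice and the normalization of the potential are assumptions of the illustration , not the potential used in the cited analysis .
....
#include <cmath>

// illustrative profile function: f(r) = c*(1 - exp(-beta*r)); smooth,
// increasing, concave, f(0) = 0 (an assumed choice, see text above)
double fprime(double r, double c, double beta) { return c * beta * std::exp(-beta * r); }

// bond force density as a function of strain s and bond length q for an
// assumed potential of the form w(s, q) = J/(eps*q) * f(q*s*s)
double bondForce(double s, double q, double eps, double J, double c, double beta) {
    return (J / (eps * q)) * 2.0 * q * s * fprime(q * s * s, c, beta);
}

// the force is maximal where f'(q s^2) + 2 q s^2 f''(q s^2) = 0; for the
// profile above this gives a critical strain proportional to 1/sqrt(q),
// after which the bond softens as in figure 1
double criticalStrain(double q, double beta) { return 1.0 / std::sqrt(2.0 * beta * q); }
....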
consider a time independent body force density and a smooth solution of .let be a fixed point in .we investigate the evolution of a small jump in displacement of the form where is a vector , is a scalar function of time , and is a unit vector .geometrically , the surface of discontinuity passes through and has normal .the vector gives the direction of motion of points on either side of the surface as they separate .we give conditions for which the jump perturbation is exponentially unstable .the _ stability tensor _ is defined by where .a sufficient condition for the rapid growth of small jump discontinuity is derived in .if the stability matrix has at least one negative eigenvalue then ( 1 ) , and ( 2 ) there exist a non - null vector and a unit vector such that grows exponentially in time .the significance of this result is that the nonconvex bond strain energy model can spontaneously nucleate cracks without the assistance of supplemental criteria for crack nucleation .this is an advantage over conventional approaches because crack initiation is predicted by the fundamental equations that govern the motion of material particles .a negative eigenvalue of can occur only if a sufficient fraction of the bonds connected to have strains .for finite horizon the elastic moduli and critical energy release rate are recovered directly from the strain potential given by .first suppose the displacement inside is affine , that is , where is a constant matrix . for small strains , i.e. , , the strain potential is linear elastic to leading order and characterized by elastic moduli and associated with a linear elastic isotropic material the elastic moduli and calculated directly from the strain energy density and are given by where the constant for dimensions . in regions of discontinuity the same strain potential is used to calculate the amount of energy consumed by a crack per unit area of crack growth , i.e. , the critical energy release rate .calculation applied to shows that equals the work necessary to eliminate force interaction on either side of a fracture surface per unit fracture area and is given in three dimensions by where .( see figure 2 for an explanation of this computation . ) in dimensions , the result is where is the volume of the dimensional unit ball , . .for each point along the dashed line , , the work required to break the interaction between and in the spherical cap is summed up in using spherical coordinates centered at , which depends on .*,title="fig : " ] [ fig - gintegral ] in the limit of small horizon peridynamic solutions converge in mean square to limit solutions that are linear elastodynamic off the crack set , that is , the pdes of the local theory hold at points off of the crack .the elastodynamic balance laws are characterized by elastic moduli , .the evolving crack set possesses bounded griffith surface free energy associated with the critical energy release rate .we prescribe a small initial displacement field and small initial velocity field with bounded griffith fracture energy given by for some . here is the initial crack set across which the displacement has a jump discontinuity .this jump set need not be geometrically simple ; it can be a complex network of cracks . 
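the computation of the critical energy release rate described above can be written as a short numerical quadrature over the spherical - cap geometry of figure 2 ; the helper worktobreak(q) stands for the limiting work needed to eliminate the force interaction in a bond of length q ( it depends on the chosen potential and its normalization and is an assumed placeholder here ) .
....
#include <cmath>

// limiting energy (per unit volume squared) stored in a fully softened bond
// of length q; the normalization follows the chosen potential (assumed placeholder)
double workToBreak(double q);

// critical energy release rate in 3d: for each point at depth z below the
// fracture plane, sum the work to break all bonds reaching into the spherical
// cap above the plane (midpoint-rule quadrature, horizon eps)
double criticalEnergyReleaseRate(double eps, int nSteps = 200) {
    const double pi = 3.14159265358979323846;
    double Gc = 0.0;
    const double dz = eps / nSteps;
    for (int iz = 0; iz < nSteps; ++iz) {
        const double z = (iz + 0.5) * dz;              // depth below the plane
        const double dq = (eps - z) / nSteps;
        for (int iq = 0; iq < nSteps; ++iq) {
            const double q = z + (iq + 0.5) * dq;      // bond length, q >= z
            // surface area of the part of the sphere of radius q lying above
            // the plane; times dq it approximates the volume of the cap shell
            const double capShell = 2.0 * pi * q * q * (1.0 - z / q);
            Gc += workToBreak(q) * capShell * dq * dz;
        }
    }
    return Gc;   // energy consumed per unit area of crack growth
}
....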
is the dimensional hausdorff measure of the jump set .this agrees with the total surface area ( length ) of the crack network for sufficently regular cracks for .the strain tensor associated with the initial displacement is denoted by .consider the sequence of solutions of the initial value problem associated with progressively smaller peridynamic horizons .the peridynamic evolutions converge in mean square uniformly in time to a limit evolution in ;l_0 ^ 2(d,\mathbb{r}^d ) ] with the same initial data , i.e. , see , .it is found that the limit evolution has bounded griffith surface energy and elastic energy given by for , where denotes the evolving fracture surface inside the domain , , .the limit evolution is found to lie in the space of functions of bounded deformation sbd , see . for functions in sbdthe bond strain defined by is related to the strain tensor by for almost every in .the jump set is the countable union of rectifiable surfaces ( arcs ) for , see . in domains away from the crackset the limit evolution satisfies local linear elastodynamics ( the pdes of the standard theory of solid mechanics ) .fix a tolerance .if for subdomains and for times the associated strains satisfy for every then it is found that the limit evolution is governed by the pde \times d'$ } , \label{waveequationn}\end{aligned}\ ] ] where the stress tensor is given by is the identity on , and is the trace of the strain ( see ) .( see for a similar conclusion associated with an alternative set of hypotheses . )the convergence of the peridynamic equation of motion to the local linear elastodynamic equation away from the crack set is consistent with the convergence of peridynamic equation of motion for _ convex _ peridynamic potentials as seen in , , .( b ) bond strain relative to the onset of instability near the crack tip .( c ) energy consumed by the crack as a function of crack length . ]we present two example problems that demonstrate ( 1 ) that the nonlocal elastodynamic model reproduces a constant , prescribed value of , and ( 2 ) the model predicts reasonable behavior for the nucleation and propagation of complex patterns of brittle dynamic fracture . in the first example, a 0.1 m 0.1 m plate with unit thickness has a material model of the form with and where and are positive constants .these constants are determined so that the bulk modulus and the critical energy release rate are =25gpa , =500jm .the density is =1200kg - m the maximum in the bond force curve occurs at .an initial edge crack of length 0.02 m extends vertically from the midpoint of the lower boundary ( figure 3 ) .a strip of thickness along the lower boundary is subjected to a constant velocity condition m/s , causing the crack to grow .the solution method described in is used with a 400 400 square grid of nodes .the horizon is =0.00075 m .figure 3(a ) shows contours of displacement after the crack has grown halfway through the plate .the crack has a limiting growth velocity of about 1400 m/s , which is about 50% of the shear wave speed .figure 3(b ) shows a close - up view of the growing crack tip with the colors indicating . near the crack tip ,the green lobes indicate a process zone in which the material goes through a neutrally stable phase as bonds approach the maximum of the force vs. 
bond strain curve .figure 3(c ) illustrates the energy balance in the model .the curve labeled `` griffith '' represents the idealized result under the assumption that the crack uses a constant amount of energy per unit distance as it grows .the curve labeled `` peridynamic '' is the energy that is stored in bonds in the computational model that have , that is , bonds that are so far out on the curve in figure 1 that they sustain negligible force density .the fracture energy is stored in these bonds . as shown in figure 3(c ) , the energy consumed by the crack in the numerical model closely approximates what is expected for a griffith crack .we conjecture that the small difference between the griffith and peridynamic curves is due to numerical dissipation . in the second numerical example , the same material as above occupies a 0.2 m 0.1 m rectangle in the plane and has a semicircular notch as shown in figure 4 .the material has an initial velocity field =40m - s , =-13.3m - s throughout , where the and coordinates are in the horizonal and vertical directions , respectively .the rectangular region has constant velocity boundary conditions on the left and right boundaries that are consistent with the initial velocity field .as time progresses , the strain concentration near the notch causes some bonds to exceed .the resulting material instability nucleates cracks at the notch that rapidly accelerate and branch .the points associated are illustrated in figure 4 and correspond to the crack paths .many microbranches are visible in the crack paths . for most of these microbranches ,the strain energy is not sufficient to sustain growth , and they arrest .such microbranches are frequently seen in experiments on dynamic brittle fracture , for example .in this article we describe a theoretical and computational framework for analysis of complex brittle fracture based upon newtons second law .this is enabled by recent advances in nonlocal continuum mechanics that treat singularities such as cracks according to the same field equations and material model as points away from cracks .this approach is different from other contemporary approaches that involve the use of a phase field or cohesive zone elements to represent the fracture set , see .the key aspect of the elastic peridynamic material model that leads to crack growth is the nonconvexity of the bond energy density function . in the classical theory of solid mechanics , nonconvex strain energy densitiesare related to the emergence of features such as martensitic phase boundaries and crystal twinning associated with the loss of ellipticity , a type of material instability . as shown in the present paper , nonconvexity in peridynamic mechanicsleads to crack nucleation and growth through an analogous material instability within the nonlocal mathematical description .this work was partially supported by nsf grant dms-1211066 , afosr grant fa9550 - 05 - 0008 , and nsf epscor cooperative agreement no .eps-1003897 with additional support from the louisiana board of regents ( to r.l . ) .sandia national laboratories is a multi - program laboratory operated by sandia corporation , a wholly owned subsidiary of lockheed martin corporation , for the u.s .department of energy s national nuclear security administration under contract de - ac04 - 94al85000 .99 l. ambrosio , a. coscia , and g. dal maso . _ fine properties of functions with bounded deformation . _archive for rational mechanics and analysis 139 , ( 1997 ) , pp .b. bourdin , c. larsen , c. 
richardson . _ a time - discrete model for dynamic fracture based on crack regularization . _ international journal of fracture 168 , ( 2011 ) , pp .m. borden , c. verhoosel , m. scott , t. hughes , and c. landis . _ a phase - field description of dynamic brittle fracture ._ computer methods in applied mechanics and engineering 217 , ( 2012 ) , pp .duarte , o.n .hamzeh , t.j .liszka , and w.w .tworzydlo . _ a generalized finite element method for the simulation of three - dimensional dynamic crack propagation ._ computer methods in applied mechanics and engineering 190 , ( 2001 ) , pp. 22272262 .m. elices , g. v. guinea , j. gmez , and j. planas . _ the cohesive zone model : advantages , limitations , and challenges . _engineering fracture mechanics , 69 ( 2002 ) , pp .e. emmrich and o. weckner ._ on the well - posedness of the linear peridynamic model and its convergence towards the navier equation of linear elasticity. _ communications in mathematical sciences , 4 ( 2007 ) , pp . 851864 .j. fineberg and m. marder ._ instability in dynamic fracture ._ physics reports , 313 ( 1999 ) , pp .foster , s.a .silling , and w. chen . _ an energy based failure criterion for use with peridynamic states ._ journal for multiscale computational engineering , 9 ( 2011 ) , pp .y.d . ha , and f. bobaru ._ studies of dynamic crack propagation and crack branching with peridynamics . _international journal of fracture , 162 ( 2010 ) , pp . 229244 .w. hu , y.d .ha , f. bobaru , and s.a . silling . _ the formulation and computation of the nonlocal j - integral in bond - based peridynamics . _international journal of fracture , 176 ( 2012 ) , pp .larsen , c. ortner , and e. suli ._ existence of solutions to a regularized model of dynamic fracture ._ mathematical models and methods in applied sciences 20 , ( 2010 ) , pp .10211048 . c. miehe , m. hofacker , and f. welschinger ._ a phase field model for rate - independent crack propagation : robust algorithmic implementation based on operator splits ._ computer methods in applied mechanics and engineering 199 , ( 2010 ) , pp . 27652778 .. silling . _reformulation of elasticity theory for discontinuities and long - range forces ._ journal of the mechanics and physics of solids , 48 ( 2000 ) , pp .silling and e. askari . _ a meshfree method based on the peridynamic model of solid mechanics . _ computers and structures , 83 ( 2005 ) , pp .silling , m. epton , o. weckner , j. xu , and e. askari . _ peridynamic states and constitutive modeling ._ journal of elasticity , 88 ( 2007 ) , pp .silling and r.b ._ convergence of peridynamics to classical elasticity theory ._ journal of elasticity , 93 ( 2008 ) , pp .silling and r.b .lehoucq . _ peridynamic theory of solid mechanics ._ advances in applied mechanics , 44 ( 2010 ) , pp . 73166 .
|
a mechanical model is introduced for predicting the initiation and evolution of complex fracture patterns without the need for a damage variable or law . the model , a continuum variant of newton's second law , uses integral rather than partial differential operators , where the region of integration is a finite domain . the force interaction is derived from a novel nonconvex strain energy density function , resulting in a nonmonotonic material model . the resulting equation of motion is proved to be mathematically well - posed . the model has the capacity to simulate the nucleation and growth of multiple , mutually interacting dynamic fractures . in the limit of a vanishing region of integration , the model reproduces the classic griffith model of brittle fracture . the simplicity of the formulation avoids the need for supplemental kinetic relations that dictate crack growth or for an explicit damage evolution law . key words : brittle fracture , peridynamic , nonlocal , material stability , elastic moduli
|
the search for the most knowledgeable people in some specific area , with basis on documents describing people s activities , is a challenging problem that has been receiving highly attention in the information retrieval community. usually referred to as expert finding , the task involves taking a user query as input , with a topic of interest , and returns a list of people ordered by their level of expertise towards the query topic .although expert search is a recent concern in the information retrieval community , there are already many research efforts addressing this specific task exploring different retrieval models .many of the most effective models for expert finding are mainly based in language models frameworks .the main problem of these methods is that they can only take into account textual similarities between the query topics and documents .more recently , there have been some works proposed in the literature which address the problem of expert finding as a combination of multiple sources of evidence . instead of only ranking candidates through textual similarities between documents and query topics ,the major concern in these approaches relies in how to combine different expertise evidences in an optimal way .many of the proposed approaches that follow this paradigm are based on supervised machine learning techniques or discriminative probabilistic models .although these methods have the advantage of being able to combine a large pool of heterogeneous data sources in an optimal way , they are not scalable to a real world expert finding scenario for the following reasons .first , the concept of expert itself is very ambiguous , since the expertise areas of a candidate are hard to quantify and the experience of a candidate is always varying through time .even when different people are asked their personal opinion about experts in some topic , they often disagree .moreover , people usually identify the most influential authors as experts , ignoring new emerging ones .second , supervised machine learning techniques require manually hand - labeled training data where the top experts for some topic are identified .since these relevance judgments are based on people s personal opinions , the system will only reflect the biases of the trainers , this way identifying more influential people than experts itself .furthermore , it is difficult to find a sufficiently large dataset , with the respective relevance judgments , which could be representative of a real world expert finding scenario . the lack of these hand labeled data constraints the system by only enabling a small subset of query - expert pairs to be trained . in traditional information retrieval, the combination of various ranking lists for the same set of documents is defined as rank aggregation .the techniques used to combine those ranking lists , in order to obtain a more accurate and more reliable ordering , are defined as data fusion . in the literature, these fusion techniques have been heavily used in multisensor approaches for both military and non - military applications .sensor data fusion is defined as the usage of techniques which enable the combination of data from multiple sensors in order to achieve higher accuracies and more inferences than a single sensor .these techniques are based on several computer science domains such as artificial intelligence , pattern recognition and statistical estimation . 
when we fuse sensor data , the levels of uncertainty arise and may affect the precision of the sensor fusion process , since incoming information provides uncertain and conflicting evidence. many previous works of the literature addressed this issue through the usage of the dempster - shafer theory of evidence in order to provide a better reasoning process .other authors showed that the success of this theory of evidence could be extended to other domains as well . in information retrieval systems , for instance, the dempster - shafer theory can be used to effectively quantify the relevance between documents and queries .the dempster - shafer theory of evidence may be seen as a generalization of the probability theory .the development of the theory has been motivated by the observation that traditional probability theory is not able to distinguish uncertain information . in the traditional probability theory, probabilities have to be associated with individual atomic hypotheses. only if these probabilities are known we are able to compute other probabilities of interest . in the dempster - shafer theory , however , it is possible to associate measures of uncertainty with sets of hypotheses , this way enabling the theory to distinguish between uncertainty and ignorance . in the domain of information theory ,the shannon s entropy has been successfully used to measure the levels of uncertainty associated to some random variable . in information retrieval ,since large datasets usually contain large amounts of noise and lack from relevant information , it is straightforward that when using fusion techniques there will be an increase in conflicting information from different sources of evidence and therefore the dempster - shafer theory of evidence plays an important role in addressing this problem .however , the current expert finding literature has been merging different evidences without taking into account the resulting conflicting information . in this work ,we propose a novel method for the expert finding task which has a similar performance to supervised machine learning approaches , but does not require any hand - labelled training data and can be easily scalable to a real world expert finding scenario , as well as any learning to rank problem .we suggest a multisensor fusion approach to find academic experts , where each candidate is associated to a set of documents containing his publication s titles and abstracts . 
in order to extract different sources of expertise from these documents, we defined three sensors : a text similarity sensor , a profile information sensor and a citation sensor .the text sensor collects events derived from traditional information retrieval techniques , which measure term co - occurrences between the query topics and the documents associated to a candidate .the profile sensor measures the total publication record of a candidate throughout his career , under the assumption that highly prolific candidates are more likely to be considered experts .and the citation sensor uses citation graphs to capture the authority of candidates from the attention that others give to their work in the scientific community .each sensor will rank the candidates according to the different evidences that they collected .most of the times will end up disagreeing between each other by considering different candidates as experts , resulting in a conflict and in a rise of uncertainty .we apply the dempster - shafer theory of evidence combined with shannon s entropy to resolve the conflict and come up with a more reliable and accurate ranking list .the main motivation in using the dempster - shafer theory of evidence in this problem is given by the fact that the data fusion techniques used in the expert finding literature can not deal with uncertainty when fusing the different sources of evidence .when the results obtained by each sensor are incompatible , a method is required to resolve this conflicting information and come up with a final decision .the dempster - shafer theory of evidence enables this information treatment by assigning a degree of uncertainty to each sensor .this is measured through the amount of conflicting information present in all sensors .a final decision is then made using the computed degrees of belief .we have evaluated our expert finding system in a dataset which lacks in relevant information about academic publications from the computer science domain and compared it with an enriched version of the same dataset .we chose both datasets in order to verify the performance of the proposed system in different scenarios where there is poor information and a lot of noise in the data as well as in situations where the dataset is complete and full of information , this way showing that our system can be scalable to any academic dataset .the main hypothesis of this work is that the dempster - shafer theory of evidence can provide better results than a standard rank aggregation framework , because through conflicting information , this theory can assign a degree of belief based on uncertainty levels to each sensor and come up with a final decision .the expert finding literature is based on two main datasets , an organizational dataset , which was made available by the text retrieval conference ( trec ) , and an academic dataset that is the dblp computer science bibliography dataset .many authors have performed many experiments in both datasets .the most representative works performed in the trec dataset belong to and . for the dblp dataset ,the most representative works belong to , which has made available a dataset containing only relevance judgments for the dblp dataset , and by . 
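as an illustration of the three sensors introduced above , the sketch below shows one minimal way of turning a candidate's publication record into raw text , profile and citation scores . the event definitions used in the actual system are not reproduced here ; the term - counting , publication - counting and in - degree functions ( and every name in the snippet ) are simplified , hypothetical stand - ins written in python .

def text_sensor_score(query, documents):
    """count query-term occurrences in the candidate's titles and abstracts."""
    terms = query.lower().split()
    return sum(doc.lower().count(term) for doc in documents for term in terms)

def profile_sensor_score(documents):
    """use the size of the publication record as a crude expertise cue."""
    return len(documents)

def citation_sensor_score(author, citation_links):
    """count citations received, i.e. the in-degree in the citation graph."""
    return sum(1 for _, cited in citation_links if cited == author)

# toy data: candidate -> list of publication texts, plus (citing, cited) links
corpus = {
    "author_a": ["boosting algorithms for ranking", "a survey of boosting"],
    "author_b": ["indexing structures for databases"],
}
citation_links = [("author_b", "author_a"), ("author_a", "author_b"),
                  ("author_b", "author_a")]  # repeated link = another citing paper

query = "boosting"
for author, docs in corpus.items():
    print(author,
          text_sensor_score(query, docs),
          profile_sensor_score(docs),
          citation_sensor_score(author, citation_links))

in a real run these raw scores would be computed per event and per query before any fusion step , but the snippet already shows why the three sensors can disagree : an author may score highly on text overlap while having few publications or citations .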
for this paper, we could not apply our multisensor approach to the trec dataset , because this dataset consists of a collection of web pages and mainly textual features could be extracted .in addition , the state of the art approaches that use this dataset are only based on the textual contents between the query topics and the documents .if we applied this dataset to our multisensor approach , we would only have a single sensor detecting textual events .this would be a disadvantage since there would not be any more inferences to improve the ones made by this textual sensor .the dblp dataset , on the other hand , contains the authors publication records , is very rich on citation links ( which enable the exploration of graph structures ) and contains the publications titles and abstracts . with this dataset, we could automatically extract different sources of evidence .for this reason , we based our experiments on the dblp computer science academic dataset . the main contributions of this paper are summarized as follows : 1 . a multisensor approach for expert finding .we offer an approach that gathers different information from data and enables the combination of different sources of evidence effectively .contrary to machine learning methods , our approach does not leverage on hand labeled data based on personal relevance judgments . instead , given a set of publication records , our method combines the inferences made by three different sensors , forming a more accurate and reliable ranking list . in the case of machine learning techniques, this could never happen , since the system would have to be trained using the personal opinions of individuals .thus , the system would only reflect the biases of the trainers . in this work, we defined three different types of sensors in order to estimate the level of expertise of an author : the textual similarity sensor , the profile information sensor and the citation sensor which detects citation patterns regarding the scientific impact of a candidate in the scientific community .since each sensor can detect various events for each candidate , we fuse the different events using a rank aggregation approach where we explore several state of the art data fusion techniques , namely combsum , borda fuse and condorcet fusion ( detailed in section [ sec : multisensor ] ) .the events detected by each sensor are based on the preliminary research in .the formalization of a dempster - shafer framework for expert finding .when fusing data from different sensors , each detecting different types of evidences , it is straightforward that each sensor will give more weight to different candidates according to the information that each one of them has collected .this leads to conflicting information between the sensors . illustrating this issue with a military application , in a presence of a plane various sensors must detect if it is friend or foe .if these sensors do not agree with each other , then we have conflicting information that has to be treated separately in order to come up with a decision if the plane is in fact friend or foe . 
following the multisensor literature, we decided to address this problem through a dempster - shafer framework allowing one to combine evidences from different sensors and arriving at a degree of belief ( represented by a belief function ) that takes into account all the available evidences collected by all sensors ( detailed in section [ sec : dempster - shafer ] ) .the main advantage of using this theory is that it enables the specification of a degree of uncertainty to each sensor , instead of being forced to supply prior probabilities that add to unity , just like in traditional probability theory .the usage of shannon s entropy formula to help uncovering the importance of each sensor .the dempster - shafer theory requires that we know how certain a sensor is when detecting that a candidate is an expert . in the literature this information is usually provided by the judgments of knowledgeable people . to avoid asking people their opinion about the accuracy of each sensor, we used the shannon s entropy formula to compute the degree of belief on each different sensor . by measuring the total entropy of each sensor ,we are able to provide to the dempster - shafer framework belief functions based in the amount of reliable information that each sensor can detect in the presence of a candidate , instead of being dependent on other people s judgments .+ the rest of this paper is organized as follows : section [ sec : related - work ] presents the main concepts and related works in the literature .section [ sec : multisensor ] explains the multisensor framework proposed in this paper , as well as all the events each sensor can detect and the data fusion techniques used fuse all the events perceived by each individual sensor .section [ sec : dempster - shafer ] formalizes the dempster - shafer theory of evidence .section [ sec : shannon ] details the shannon s entropy formula developed for this work .section [ sec : validation ] presents the datasets used in our experiments as well as the evaluation metrics used .section [ sec : results ] presents the results obtained in the experiments and a brief discussion .finally , section [ sec : conclusions ] presents the main conclusions of this work .the two most popular and well formalized methods are the candidate - based and the document - based approaches . in candidate - based approaches, the system gathers all textual information about a candidate and merges it into a single document ( i.e. , the profile document ) .the profile document is then ranked by determining the probability of the candidate given the query topics , figure [ fig : candidate - model ] . in the literature ,the candidate - based approaches are also known as model 1 in and query independent methods in . in document - based approaches ,the system gathers all documents which contain the expertise topics included in the query .then , the system uncovers which candidates are associated with each of those documents and determines their probability scores .the final ranking of a candidate is given by adding up all individual scores of the candidate in each document [ fig : document - model ] .document - based approaches are also known as model 2 in and query dependent methods in .experimental results show that document - based approaches usually outperform candidate - based approaches .the first candidate - based approach was proposed by where the ranking of a candidate was computed by text similarity measures between the query topics and the candidate s profile document . 
formalized a general probabilistic framework for modeling the expert finding task which used language models to rank the candidates . presented a general approach for representing the knowledge of a candidate expert as a mixture of language models from associated documents .later , and have introduced the idea of dependency between candidates and query topics by including a surrounding window to weight the strength of the associations between candidates and query topics . in what concerns the document - based approaches ,such model was first proposed by in the text retrieval conference ( trec ) of 2005 .they proposed a two - stage language model where the first stage determines whether a document is relevant to the query topics or not and the second determines whether or not a query topic is associated with a candidate .the most well known document - based approach from the literature is model 2 , proposed by . in thisapproach language models are used to rank the candidates according to the probability of a document model having generated the query topics .later , explored the usage of positional information and formalized a document representation which includes a window surrounding the candidate s name .methods apart from the candidate - based and the document - based approaches have also been proposed in the expert finding literature . for instance , formalized a voting framework combined with data fusion techniques . each candidate associated with documents containing the query topics received a vote and the ranking of each candidate was given by the aggregation of the votes of each document through data fusion techniques . proposed a query sensitive authorrank model .they modeled a co - authorship network and measured the weight of the citations between authors with the authorrank algorithm . since authorrank is query independent , the authors added probabilistic models to refine the algorithm in order to encompass the query topics . have proposed the person - centric approach which combines the ideas of the candidate and document - based approaches .their system starts by retrieving the documents containing the query topics and then ranks candidates by combining the probability of generation of the query by the candidate s language model .more recently , proposed a learning to rank approach where they created a feature generator composed of three components , namely a document ranking model , a cut - off value to select the top documents according the query topics and rank aggregation methods . using those features , the authors made experiments with the adarank listwise learning to rank algorithm , which outperformed all generative probabilistic methods proposed in the literature . have also explored different learning to rank algorithms to find academic experts , where they defined a whole set of features based on textual similarities , on the author s profile information and based on the author s citation patterns . proposed a learning framework for expert search based on probabilistic discriminative models .they defined a standard logistic function which enabled the integration of various sets of features in a very simple model .their features included , for instance , standard language models , document features ( ex . 
title containing query topics ) , proximity features , etc .the dempster - shafer theory of evidence has been widely used in the literature , specially in the sensor fusion domain .for instance , used a set of artificial neural networks to identify the degree of damage of a bridge .they applied the dempster - shafer theory together with shannon s entropy to combine the events detected by the artificial neural networks in order to address the uncertainties arised in each network . formulated a general framework for context - aware ( i.e. , computers trying to understand our physical world ) . in their work , they used a set of sensors in order to generate fragments of context information .the dempster - shafer theory of evidence was used to fuse the information from the various sensors and to manage the uncertainties as well as resolving the conflicting information between them .another example of the application of this framework is in e - business with the work of .the authors proposed a modification of this framework and developed the hybrid dempster - shafer method to estimate the reliability of business process and quality control .their hybrid dempster - shafer theory was based on entropy theory and information of co - evolutionary computation .their results showed significant improvements over the standard dempster - shafer framework .the rank aggregation framework is often used together with data fusion methods that take their inspiration on voting protocols proposed in the area of statistics and in the social sciences . suggested a classification to distinguish the different existing data fusion algorithms into two categories , namely the positional methods and the majoritarian algorithms .later , have also proposed the score aggregation methods .the positional methods are characterized by the computation of a candidate s score based on the position that the candidate occupies in each ranking lists given by each voter .if the candidate falls in the top of the ranked list , then he receives a maximum score .if the candidate falls in the end of the list , then his score is minimum .the most representative positional algorithm is probably borda count .jean -charles de borda proposed this method in 1770 as being the election by order of merit method . later , computer scientists mapped this method to combine data from different ranking lists , proving its effectiveness .the majoritarian algorithms are characterized by a series of pairwise comparisons between candidates .that is , the winner is the candidate which beats all other candidates by comparing their scores between each other .the most representative majoritarian algorithm is probably the condorcet fuse method proposed by .however , there have been other proposals based on markov chain models or on techniques from multicriteria decision theory . finally , the score aggregation methods determine the highest ranked candidate by simply combining his ranking scores from all the participating systems . examples of such methods are combsum , combmnz and combanz , all proposed by . 
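to make the score aggregation and positional families concrete , the following python sketch implements combsum ( the sum of min - max normalised scores ) and borda count over a set of ranking lists . the input format , a dictionary of candidate scores per ranker , is an assumption made for illustration ; combmnz and combanz follow the same pattern with a different final combination step .

def min_max_normalise(scores):
    """rescale one ranker's scores into [0, 1]."""
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0
    return {c: (s - lo) / span for c, s in scores.items()}

def comb_sum(ranked_lists):
    """score aggregation: sum the normalised scores of each candidate."""
    fused = {}
    for scores in ranked_lists:
        for c, s in min_max_normalise(scores).items():
            fused[c] = fused.get(c, 0.0) + s
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

def borda_count(ranked_lists):
    """positional method: the top candidate of each list gets the most points."""
    fused = {}
    for scores in ranked_lists:
        ordering = sorted(scores, key=scores.get, reverse=True)
        n = len(ordering)
        for rank, c in enumerate(ordering):
            fused[c] = fused.get(c, 0) + (n - rank)
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

lists = [{"a": 12.0, "b": 7.0, "c": 1.0},
         {"a": 0.2,  "b": 0.9, "c": 0.5}]
print(comb_sum(lists))
print(borda_count(lists))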
in this article , we made experiments with representative state of the art data fusion algorithms from the positional , majoritarian and score aggregation approaches .section [ sec : multisensor ] details the rank aggregation approaches .the multisensor approach proposed in this work contains three different sensors : a text sensor , a profile sensor and a citation sensor .the text sensor considers textual similarities between the contents of the documents associated with a candidate and the query topics , in order to build estimates of expertise .it is assumed that if there are textual evidences of a candidate where the query topics occur often , then it is probable that this candidate is an expert in the topic expressed by the query .the profile sensor considers the amount of publications that a candidate has made throughout his career .highly prolific candidates are more likely to be considered experts . andfinally , the citation sensor considers the impact of a candidate in the scientific community and also relies on linkage structures , such as citation graphs , to determine the candidate s knowledge .candidates with high citations patterns are assumed to be experts .the multisensor approach proposed in this paper consists in two different fusion processes : ( i ) one which will fuse all the events that each single sensor detected in a presence of a candidate and ( ii ) another which will fuse the information of the different sensors taking into account conflicting and uncertain data between them .the first fusion process will be addressed through a rank aggregation framework with state of the art data fusion techniques and the second one will be addressed as a multisensor fusion process using the dempster - shafer theory of evidence combined with shannon s entropy . rank aggregation can be defined as the problem of combining different ranking orderings over the same set of candidates , in order to achieve a more accurate and reliable ordering .figure [ fig : datafusion ] illustrates the rank aggregation framework proposed in this paper . given a query ,the system starts by retrieving all the publication records which contain the query topics and extracts all the authors associated to those documents. these authors will be the candidates which will serve as inputs to our rank aggregation framework .the framework is given as input a set of candidates and a query expressing a topic of expertise .each sensor will use their own event extractor which detects a set of different events in the presence of a set of candidates . in the event extractor of every sensor ,each event is responsible to order the detected set of candidates in descending order of relevance .thus , each sensor will contain different ranking lists which need to be fused .a data fusion algorithm is then applied in order to combine the various ranking lists detected in each sensor .the output of this framework is a list of candidates ordered by their expertise level towards the query topic .the events detected by each sensor are similar to the features proposed in the preliminary research work and therefore will not be detailed . 
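a majoritarian fusion step of the kind applied inside each sensor can be sketched as follows . the snippet approximates condorcet fusion by ordering candidates by the number of pairwise majority wins across the event rankings produced by one sensor ; the original condorcet fuse algorithm resolves ties and cycles more carefully , so this is an illustrative simplification rather than the exact procedure .

from itertools import combinations

def condorcet_fuse(ranked_lists):
    """order candidates by pairwise majority wins over several rankings."""
    candidates = set().union(*map(set, ranked_lists))
    pos = lambda r, c: r.index(c) if c in r else len(r)  # unranked counts as last
    wins = {c: 0 for c in candidates}
    for a, b in combinations(candidates, 2):
        a_pref = sum(1 for r in ranked_lists if pos(r, a) < pos(r, b))
        b_pref = sum(1 for r in ranked_lists if pos(r, b) < pos(r, a))
        if a_pref > b_pref:
            wins[a] += 1
        elif b_pref > a_pref:
            wins[b] += 1
    return sorted(candidates, key=lambda c: wins[c], reverse=True)

# three event rankings detected by, say, the text sensor for one query
event_rankings = [["author_a", "author_b", "author_c"],
                  ["author_b", "author_a", "author_c"],
                  ["author_a", "author_c", "author_b"]]
print(condorcet_fuse(event_rankings))  # the sensor-level ranking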
table [ tab : features ] summarizes all the events that can be detected by each sensor ..the various sets of events detect in each different sensor [ cols="<,<,<,^,^",options="header " , ] as one can see , all three sensors disagree with each other in what concerns the final ranked list of authors .the text sensor considers an expert , whereas the profile and the citation sensor find more relevant . andin the same way , the sensors disagree between each other about the authors that hold the second and the third positions of the ranking list .if we applied combsum again in these three sensors , this conflicting information would not be treated .combsum would only sum again all the normalized scores and the output would be the final ranked list .that is , the authors which already had the highest scores would remain in the top of the list and the author with zero scores would remain with a zero score after the fusion , not giving them a chance to go up in the final ranking list . since we are dealing with different sources of evidence with conflicting information ,the next step of the algorithm is to apply the dempster - shafer theory of evidence together with shannon s entropy .[ sec : dempster - shafer ] the dempster - shafer theory of evidence provides a way to associate measures of uncertainty to sets of hypothesis when the individual hypothesis are imprecise or unknown .all possible mutually exclusive hypothesis are contained in a _ frame of discernment _ . in the scope of this work, our hypothesis will be if either an author is an expert or not .for instance , given two authors and , , the frame of discernment is then an enumeration of all possible combinations of these authors , that is , \ { \{ } , \{ } , \{ } , } , performing a total of elements ( ) .the main advantage of using this theory , is that it enables the specification of a degree of uncertainty , instead of being forced to supply prior probabilities that add to unity , just like in traditional probability theory .the dempster - shafer theory of evidence enables the definition of a belief mass function which is a mapping of to the interval between 0 and 1 .it can tell how relevant an author is when considering all the available evidence that supports and not any subsets of . in the case of our multisensor approach, each sensor will provide a belief mass function to each author contained in the frame of discernment , by detecting the events associated with each one of them .the belief mass function is applied to each element of the frame of discernment and has three requirements : * the mass function has to be a value between 0 and 1 , ] .the belief is the lower bound of the confidence interval and is defined as being the total evidence that supports the hypothesis , that is , it is the sum of all the masses of the subsets associated to set . in the same way , the plausibility corresponds to the upper bound of the confidence interval and is defined as being the sum of all the masses of the set that intersect the set of interest . to combine the evidences detected by two sensors and , the dempster - shafer theory provides a combination rule which is given by equation [ eq : rule ] . in the above formula , measures the amount of conflict between the two sensors and is given by equation [ eq : k ] . 
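the combination rule of equation [ eq : rule ] can be written compactly by representing each focal element as a set of authors . the python sketch below combines two mass functions , accumulates the conflict k from pairs of focal elements with an empty intersection , and renormalises by 1 - k ; the toy masses are invented purely for illustration .

from itertools import product

def dempster_combine(m1, m2):
    """combine two mass functions given as {frozenset_of_authors: mass} dicts."""
    combined, conflict = {}, 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mb * mc
        else:
            conflict += mb * mc               # k: total conflicting mass
    if conflict >= 1.0:
        raise ValueError("sources are completely conflicting")
    return {a: m / (1.0 - conflict) for a, m in combined.items()}, conflict

frame = frozenset({"author_1", "author_2", "author_3"})
m_text = {frozenset({"author_1"}): 0.5,
          frozenset({"author_2"}): 0.2,
          frame: 0.3}                          # mass on the whole frame = uncertainty
m_profile = {frozenset({"author_2"}): 0.6,
             frozenset({"author_3"}): 0.1,
             frame: 0.3}

fused, k = dempster_combine(m_text, m_profile)
for focal, mass in sorted(fused.items(), key=lambda kv: kv[1], reverse=True):
    print(set(focal), round(mass, 3))
print("conflict k =", round(k, 3))

note how the mass placed on the whole frame keeps every author alive after combination : even author_3 , which the text sensor never supported , receives a small non - zero mass , which is exactly the behaviour discussed in the worked example later in this section .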
in our approach , the mass functions are used to represent the author s relevance towards the query topics , however the dempster - shafer theory requires that we know how certain a sensor is when detecting that a candidate is an expert . in traditional applications of the dempster - shafer theory ,these values are given by experts on the topic , however we did not think that asking for someone s opinion to give estimates about the accuracy of each sensor towards an author would bring a solid solution for our multisensor expert finding approach , and therefore we used a representative probabilistic formula of the information theory literature to address this issue , namely the shannon s entropy formula .in information theory , entropy is defined as a measure of uncertainty of a random variable .let be a discrete random variable representing a sensor .assume that is the set of all events detectable by sensor and is the set of all authors .let be a function which determines if the score detected by event for author is relevant ( that is , returns 1 if the score is bigger than zero ) , and , be respectively the total number of authors which are being analyzed by and the total number of events detected by the sensor .the entropy of is defined as : in the above formula , if then the entropy is , which means that the event provides consistent information for all authors and therefore there are no levels of uncertainty associated to the sensor . on the other hand , if there are high levels of uncertainty associated to the event , then the maximum information associated with a sensor is given when the events are equally distributed over all authors just like described in equation [ eq : norm ] . in conclusion , returning to the dempster - shafer theory of evidence , the mass function of an author detected by some sensor will be given by equation [ eq : d - s - final ] , where represents the rank aggregation score of candidate using a data fusion algorithm . for more details in how to use the following equation, please refer to the book of .section [ sec : ex1 ] shows a detailed example in how this formula is applied in our system . continuing the example started in section [ sec : ex1 ] , after fusing the data in each sensor , we ended up noticing that the sensors did not agree with each other about the top experts , resulting in conflicting information that needs to be treated . before applying the dempster - shafer theory of evidence , we need to determine the importance of each sensor .if the profile sensor is nt very reliable , then when fusing data , one should take that into consideration by decreasing the scores of the authors detected in that sensor . to compute the relevance weight of each sensor , we made use of shannon s entropy formula described in equation [ eq : norm ] . in the previous example , in table [ tab : author_sensor] there are no zero entries in the unnormalized scores of each sensor .that means that , in this example , all sensors are equally important and therefore the maximum entropy for authors with non - zero detected events for each one of them is given by the following formula : the shannon s entropy formula of each sensor is given by equation [ eq : shannon ] and for this example is computed in the following way : at this point , we can compute the overall mass function of the sensors which is given by : and since the mass function requires that the sum of its elements is , we need to normalize these values . 
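because the entropy formula of equation [ eq : shannon ] is only summarised informally above , the sketch below should be read as one plausible interpretation rather than the paper's exact computation : each event contributes a binary entropy term based on the fraction of candidates it detects , the average entropy acts as the sensor's uncertainty weight , and equation [ eq : d - s - final ] is mimicked by assigning the fused rank - aggregation scores to singleton authors and the entropy weight to the whole frame before renormalising . the worked example then continues below with the combination step .

import math

def sensor_entropy(event_hits, n_authors):
    """event_hits[i]: number of candidates with a non-zero score for event i."""
    h = 0.0
    for hits in event_hits:
        p = hits / n_authors
        if 0.0 < p < 1.0:                      # p = 0 or 1 means no uncertainty
            h -= p * math.log2(p) + (1.0 - p) * math.log2(1.0 - p)
    return h / len(event_hits)                 # average uncertainty per event

def sensor_mass_function(fused_scores, entropy_weight):
    """singletons carry the fused scores, the full frame carries the entropy."""
    frame = frozenset(fused_scores)
    masses = {frozenset({a}): s for a, s in fused_scores.items() if s > 0.0}
    masses[frame] = entropy_weight
    total = sum(masses.values())
    return {focal: m / total for focal, m in masses.items()}

combsum_scores = {"author_1": 0.9, "author_2": 0.4, "author_3": 0.0}
h_text = sensor_entropy(event_hits=[3, 2, 2], n_authors=3)
m_text = sensor_mass_function(combsum_scores, h_text)
print(round(h_text, 3))
print({tuple(sorted(focal)): round(mass, 3) for focal, mass in m_text.items()})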
now , we need to add the previous combsum fusion results to the mass function as well , such that the sum of all elements is one .table [ tab : sensor - mass ] shows such values . at this point, we are ready to apply the dempster - shafer theory of evidence framework .we will start fusing the text sensor with the profile sensor .the mass functions of the text sensor are discriminated in table [ tab : m_ts ] and the mass functions of the profile sensor in table [ tab : m_ps ] .note that these tables are in accordance with equation [ eq : d - s - final ] . if is a single author ,we apply the normalized scores from the rank aggregation fusion process and if is a set of authors detected by each sensor , then we apply the normalized entropy score of the sensor .the fusion process under the dempster - shafer theory of evidence framework is given through the computation of a given by table [ tab : fusion1 ] .in this tableau , we perform the intersection between each element of the text sensor , with each element of the profile sensor .for instance , m(\{author } ) m(\{author , author , author } ) = \{author } with probability .note that we multiply the probabilities , because the authors are considered independent between the two different sensors. the choice of an author as expert in the text sensor does not affect the choice of another author in the profile sensor .whenever the intersection gives an empty set , then this means that there is a conflict between the sensors and the combination rule in equation [ eq : rule ] must be applied .this rule is given by summing all the probabilities of all the events which ended up as an empty set and subtract it by .then , we divide each of the mass functions obtained in the tableau by this value .the following calculations demonstrate the computations of the fused mass functions using the combination rule . + what is interesting to notice in the above calculations is that no author ended up with a zero probability , although each sensor detected that some authors were irrelevant .if we did not use shannon s entropy to weight the importance of each sensor , author and author would end up with a probability of zero , meaning that these authors are completely irrelevant for the query .shannon s entropy enabled to give some belief in these authors , given the importance of their respective sensor , and enabled a more consistent and reliable ranking for each one of them .next , the same process is used to fuse the computed results with the citation sensor . the algorithm ends by retrieving the final ranking list : author(0.4428 ) author(0.3274 ) author(0.1359 )the multisensor approach required a large dataset containing not only textual evidences of the candidates knowledge , but also citation links . in this work, we made experiments with two different versions of the computer science bibliography dataset , also known as dblp . 
the dblp dataset has been widely used in the expert finding literature through the works of and .it has also been extensively used in citation analysis in the works of .it is a large dataset covering publications in both journals and conferences and is very rich in citation links .the two versions of the dblp dataset tested in this work correspond to the proximity version and an enriched dblp version .proximity contains information about academic publications until april 2006 .it is a quite large dataset containing more than 100 000 citation links and 400 000 authors , however it does not provide any additional textual information about the papers besides the publication s titles . on the other hand ,the enriched version of the dblp dataset , which has been made available by the arnetminer project , is a large dataset covering more than one million authors and more than two million citation links .it also contains the publication s abstracts of more than 500 000 publications .table [ t1 ] provides a statistical characterization of both datasets .we made experiments with both datasets to verify the scalability of our method in the presence of datasets containing a lot of information and datasets full of noise and lacking on relevant information . to validate the different experiments performed in this work, we required a set of queries with the corresponding author relevance judgements .we used the relevant judgements provided by arnetminer , which have already been used in other expert finding experiments .the arnetminer dataset comprises a set of 13 query topics from the computer science domain , and it was mainly built by collecting people from the program committees of important conferences related to the query topics .table [ judgements ] shows the distribution for the number of experts associated to each topic , as provided by arnetminer . since the arnetminer dataset contains only relevant judgements for all query topics , we complemented this dataset by adding non relevant authors for each of the query topics .our validation set included all relevant authors plus a set of non relevant authors until we end up with a set of 400 authors .these non relevant authors were added by searching the database with the keywords associated to each topic and that were highly ranked according to the bm25 metric .thus , the validation sets built for each dataset contained exactly the same relevant authors , but had different non relevant ones .the performance of our multisensor approach was validated through the usage of the mean average precision ( map ) metric and precision at rank ( p ) .precision at rank is used when a user wishes only to look at the first retrieved domain experts .the precision is calculated at that rank position through equation [ eq : precisionrank ] . in the formula , is the number of relevant authors retrieved in the top _ k _ positions . only considers the top - ranking experts as relevant and computes the fraction of such experts in the top- elements of the ranked list .the mean of the average precision over test queries is defined as the mean over the precision scores for all retrieved relevant experts . for each query ,the average precision ( ap ) is given by : = \frac { \sum_{rn=1}^{n } ( p(rn ) \times rel(rn ) ) } { r}\ ] ] in the formula , is the number of candidates retrieved , is the rank number , returns either 1 or 0 depending on the relevance of the candidate at . 
is the precision measured at rank and is the total number of relevant candidates for a particular query .we also performed statistical significance tests over the results using an implementation of the two sided randomization test made available by mark d. smuckerthis section presents the results of the experiments performed in this work , more specifically : 1 . in section [ subsec:1 ], we compared our multisensor approach against a general rank aggregation framework , using only the proximity dataset .the results of this experiment showed that the dempster - shafer theory of evidence combined with shannon s entropy enables much better results than the standard rank aggregation approach .2 . in section [ subsec:2 ] , we determined the impact of each sensor of our multisensor approach using the proximity dataset .experiments revealed that the combination of the text similarity sensor together with the citation sensor achieved the best results in this dataset .results also unveiled that combining estimators based on the author s publication record and on their citations patterns in the scientific community achieved the poorest results .3 . in section [ subsec:3 ], we repeated the experiments of our multisensor approach on an enriched version of the dblp dataset .we demonstrated that the dempster - shafer theory of evidence also provides better results when used in datasets which do not have high levels of conflicting information .these results prove how general our multisensor approach can be , since it provides better results than the standard rank aggregation approach , whether using datasets with poor information ( with high levels of conflict and uncertainty ) or with enriched datasets ( low levels of uncertainty and high levels of confidence ) .4 . in section [ subsec:5 ], we compared our multisensor approach against representative state of the art works .results showed that our approach achieved a map of more than 66% when compared to non - machine learning works of the state of the art , becoming one of the top contributions in the literature .more recently , approaches based on supervised machine learning techniques have been proposed in the literature of expert finding . in section [ subsec:6 ]we compared our multisensor approach against two supervised learning to rank techniques from the state of the art .the results obtained showed that the usage of a supervised approach does not bring significant advantages to the system when compared to our multisensor data fusion approach , concluding that this approach provides very competitive results without the need of hand - labelled data with personal relevance judgements .[ subsec:1 ] the main hypothesis motivating this experiment is to verify if our multisensor approach , combined with the dempster - shafer theory of evidence , achieves better results than a standard rank aggregation approach . to validate this hypothesis , we experimented our multisensor approach with different data fusion techniques and compared it with the rank aggregation framework using the same fusion algorithms .table [ tab : proximity - d - s - results ] presents the obtained results for the three proposed sensors , more specifically the text similarity sensor , the profile information sensor and the citation sensor . the results on table [ tab : proximity - d - s - results ] show that our multisensor approach using the dempster - shafer framework outperformed the general rank aggregation approach . 
when two sensors do not agree between each other , it is difficult to get a final decision whether a candidate is an expert or not . in these situations ,a standard rank aggregation approach simply ignores the conflict and applies a data fusion technique to merge the scores of that candidate in both sensors .the dempster - shafer theory of evidence , on the other hand , assigns a degree of uncertainty to each sensor which is measured through the amount of conflicting information present in both of them .a final decision is then made using the computed degrees of belief .this experiment shows that , when merging different sources of evidence , conflicting information should not be ignored .thus , the dempster - shafer theory of evidence plays an important role in solving these conflicts and providing a final decision . in conclusion , these results support the main hypothesis of this work which so far has been ignored in the expert finding literature : when merging multiple sources of evidence , it is necessary to apply methods to solve conflicting information , this way enabling a more accurate and more reliable reasoning .the best performing data fusion technique is the majoritarian condorcet fusion algorithm . in our multisensor approach, condorcet fusion achieved an improvement of more than 70% , in terms of map , when compared to the same algorithm in the standard rank aggregation approach .[ subsec:2 ] the data reported in the previous experiment showed that the proposed multisensor approach , combined with the condorcet fusion algorithm , achieved the best results .these results were achieved by combining three sensors : the text similarity sensor , the profile information sensor and the citation sensor . in this experiment , we are interested in determining the impact of the different sensors in out multisensor approach . to validate this, we separately tested our multisensor approach together with ( i ) the textual similarity and the profile sensors , ( ii ) the textual similarity and the citation sensors and ( iii ) the profile and the citation sensors .table [ tab : proximity - different - sensors ] shows the obtained results .table [ tab : proximity - different - sensors ] shows that the best results were achieved when the text similarity sensor works together with the citation sensor .this means that the presence of the query topics in the author s document evidences together with information of the author s impact in the scientific community plays an important role to determine if some author is an expert in some specific topic .the results also show that taking into account the publication record of the authors does not contribute for the expert finding task in such framework .the significance tests performed show that the improvements achieved by the text similarity sensor together with the citation sensor are statistically more significant , in terms of map , than all the other combinations of sensors tested .thus , the text sensor and the citation sensor acquired an improvement of more than 42% over the profile sensor combined with the citation sensor , demonstrating their effectiveness .[ subsec:3 ] the previous results demonstrated the effectiveness of our multisensor approach using the dempster - shafer theory over poor datasets . 
in this experiment , we are concerned with the performance of our multisensor approach in enriched datasets , more specifically in the enriched version of the dblp dataset .the results are illustrated in table [ t4 ] .table [ t4 ] shows that the best performing approach was our multisensor approach together with the condorcet fusion algorithm .this approach achieved an improvement of more than 46% when compared with the standard rank aggregation approach using the same fusion algorithm .although , our best performing multisensor fusion approach using dempster - shafer outperformed all other standard rank aggregation methods , we can not conclude that our approach is better , since there were no statistical significances detected . in a separate experiment, we also tried to determine which combinations of sensors provided the best results for this dataset .table [ tab : dblp - impact - sensors ] shows the obtained results . in this experiment ,the best results were achieved when the text similarity sensor is combined with the profile information sensor .this shows that , for this specific dataset , the presence of the query topics in the author s publication titles and abstracts together with the author s publication records , are very strong estimators of expertise .thus , this information is vital to determine if someone is an expert in some topic .one can also see that the combination of the profile sensor with the citation sensor achieved the poorest results .these results are the same as the ones reported for the proximity dataset , in table [ tab : proximity - different - sensors ] . in this experiment ,the improvements of the best performing sensors ( text similarity and profile ) were statistically more significant than the combination of all remaining sensors , in terms of map , this way showing the effectiveness of these estimators of expertise .[ subsec:5 ] in this experiment , we were concerned with the impact of our multisensor approach in the state of the art .table [ tab : comparison - with - state - of - the - art ] presents the results of the baseline models proposed by , namely the candidate - based model 1 and the document - based model 2 . in order to make the comparison fair , we used the code made publicly available by k. balog at http://code.google.com / p / ears/. experiments revealed that model 1 and model 2 have a similar performance in such dataset , but achieved a lower performance when compared to the multisensor approach . in model 1 , when an author publishes a paper which contains a set of words which exactly match the query topics , this author achieves a very high score in this model .in addition , since we are dealing with very big datasets , there are lots of authors in such situation and consequently the top ranked authors are dominated by non - experts , while the real experts will be ranked lowered . 
in model 2 ,since we only contain the publication s titles and some abstracts , the query topics might not occur very often in publications associated to expert authors .in such model 2 , the final ranking of a candidate is given by aggregating the scores that he achieved in each publication .if the document s abstract or title does not contain or is very poor in query topics , then the candidate will receive a lower score in the final ranking list .our multisensor approach outperformed these state of the methods , because it enables the combination of various sources of evidence instead of just using textual similarities between query terms and documents .[ subsec:6 ] in the task of expert finding , there have been several effective approaches proposed in the literature , exploring different retrieval models and different sources of evidence for estimating expertise .more recently , some works have been proposed in the literature which use supervised machine learning techniques to combine different sources of evidence in an optimal way . in this section ,we reproduce the experiments of some of those works that use learning to rank algorithms to effectively combine different estimators of expertise .more specifically , we applied the svmmap and svmrank to our test set in order to determine the impact of our multisensor approach against a supervised one .the general idea of learning to rank is to use hand - labelled data to train ranking models , this way leveraging on data to combine the different estimators of relevance in an optimal way . in the training process , the learning algorithm attempts to learn a ranking function capable of sorting experts in a way that optimizes a bound of an information retrieval performance measure ( e.g. mean average precision ) , or which tries to minimize the number of misclassifications between expert pairs , or which even tries to directly predict the relevance scores of the experts . in the test phase , the learned ranking function is applied to determine the relevance between each expert towards a new query .the two algorithms tested were svmmap and svmrank .the svmmap method builds a ranking model through the formalism of structured support vector machines , attempting to optimize the metric of average precision ( ap ) .the basic idea of svmmap is to minimize a loss function which measures the difference between the performance of a perfect ranking ( i.e. 
, when the average precision equals one ) and the minimum performance of an incorrect ranking .the svmrank method also builds a ranking model through the formalism of support vector machines .however , the basic idea of svmrank is to attempt to minimize the number of misclassified expert pairs in a pairwise setting .this is achieved by modifying the default support vector machine optimization problem , by constraining the optimization problem to perform the minimization of the number of misclassified pairs of experts .since the proximity dataset contains very sparse data , due to its lack of information , the task of expert finding in this dataset would be very trivial using a supervised machine learning approach .since many non relevant authors do nt have many information associated to them , it would be easy to find a hyperplane which would be able to classify an author as being relevant or non relevant .for this reason , we performed a supervised machine learning test in the enriched dataset in order to make the task more difficult .table [ tab : comparison - with - supervised - dblp ] presents the results of these two algorithms , for the enriched dblp dataset , and their comparison against our multisensor fusion approach using the dempster - shafer theory of evidence .table [ tab : comparison - with - supervised - dblp ] shows that the results obtained in the different algorithms revealed slightly variations when concerning the mean average precision metric , leading to the conclusion that the application of machine learning techniques to this dataset do not bring great advantages .in addition , our multisensor approach achieved better results than the supervised learning to rank algorithms for the performance measure .this metric is very important in the task of expert finding in digital libraries , since the user is only interested in searching for the top relevant experts of some topic .since the statistical significance tests performed did not accuse any differences between the algorithms , then we can state that our multisensor approach achieves a performance similar to supervised machine learning techniques and it even has the advantage of not requiring hand - labelled data with personal relevance judgements .this means that the proposed method is general and can be scalable to a real world scenario , while machine learning approaches can only be trained with a small set of data which is not representative of a real expert finding scenario .finally , figure [ fig : dblp - vs - supervised ] supports the above observations , showing the average precision of the three algorithms for each query .one can easily see that the results obtained show slightly variations between the algorithms demonstrating that our method achieves a similar performance to the supervised learning to rank algorithms .we proposed a multisensor data fusion approach using the dempster - shafer theory of evidence together with shannon s entropy . in order to extract different sources of expertise from these documents, we defined three sensors : a text similarity sensor , a profile information sensor and a citation sensor .the text sensor collects events derived from traditional information retrieval techniques , which measure term co - occurrences between the query topics and the documents associated to a candidate .the profile sensor measures the total publication record of a candidate throughout his career , under the assumption that highly prolific candidates are more likely to be considered experts . 
and the citation sensor uses citation graphs to capture the authority of candidates from the attention that others give to their work in the scientific community. experimental results revealed that the dempster-shafer theory of evidence combined with shannon's entropy formula is able to address an important issue which so far has been ignored in the expert finding literature: conflicting information. when merging information from different sources of evidence, there will always be significant levels of uncertainty. these uncertainty levels arise because we only have partial knowledge of the state of the world. the dempster-shafer framework can deal with this kind of problem through the use of a combination rule which measures the total amount of conflict between these three sensors. this way a final accurate decision can be made from noisy data. we compared our multisensor approach against representative approaches of the state of the art in expert finding. our method showed great improvements, demonstrating that the levels of uncertainty are a very important issue, not only for expert finding, but also for general ranking problems in information retrieval. it was interesting to notice that our approach also performs well on datasets which lack information, showing that the dempster-shafer theory of evidence can also be effective on poor datasets, making the approach applicable to other information retrieval tasks, such as entity ranking or search engines, in order to address the problem of uncertainty. finally, we tested our algorithm by comparing it to state-of-the-art supervised machine learning approaches which use learning to rank techniques. although these approaches usually provide good results for this task, they suffer from the drawback of requiring hand-labelled data with relevance judgements about the level of expertise of an author towards a query. even the term expert is hard to define, since the expertise areas of a candidate are difficult to quantify and the experience of a candidate varies through time. as a consequence, this hand-labelled data will only be the evidence of the trainers' judgements and the final system will only reflect their biases. in this scope, our multisensor approach is more general and more useful than supervised approaches, since it does not require the training of a system and therefore it is much faster to implement and scalable to a real-world scenario. our method is more focused on finding information in the given data, rather than finding patterns based on personal relevance judgements, and this is a major advantage over supervised learning to rank approaches.
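as a concrete illustration of the combination rule referred to above, the following minimal sketch implements dempster's rule for two mass functions over a small frame of discernment. the frame, the masses and the sensor names are invented for illustration only; the entropy-based weighting used in our experiments is not reproduced here.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts: frozenset -> mass) with Dempster's rule."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb          # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: the sources are incompatible")
    norm = 1.0 - conflict
    return {s: w / norm for s, w in combined.items()}, conflict

# toy frame of discernment: two candidate experts (hypothetical values)
A, B = frozenset({"author_a"}), frozenset({"author_b"})
theta = A | B
text_sensor     = {A: 0.6, B: 0.1, theta: 0.3}
citation_sensor = {A: 0.3, B: 0.4, theta: 0.3}

fused, k = dempster_combine(text_sensor, citation_sensor)
print(f"conflict K = {k:.2f}")
for s, w in sorted(fused.items(), key=lambda x: -x[1]):
    print(set(s), round(w, 3))
```

the conflict value k printed above is exactly the quantity the combination rule normalizes away; a large k signals that the sensors point towards different candidates.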
|
expert finding is an information retrieval task concerned with the search for the most knowledgeable people on some topic, based on documents describing people's activities. the task involves taking a user query as input and returning a list of people sorted by their level of expertise regarding the user query. this paper introduces a novel approach for combining multiple estimators of expertise based on a multisensor data fusion framework together with the dempster-shafer theory of evidence and shannon's entropy. more specifically, we define three sensors which detect heterogeneous information derived from the textual contents, from the graph structure of the citation patterns of the community of experts, and from profile information about the academic experts. given the evidence collected, the sensors may identify different candidates as experts and consequently may not agree on a final ranking decision. to deal with these conflicts, we apply the dempster-shafer theory of evidence combined with shannon's entropy formula to fuse this information and come up with a more accurate and reliable final ranking list. experiments made over two datasets of academic publications from the computer science domain attest to the adequacy of the proposed approach over traditional state-of-the-art approaches. we also ran experiments against representative supervised state-of-the-art algorithms. the results revealed that the proposed method achieves a performance similar to these supervised techniques, confirming the capabilities of the proposed framework.
|
suppose alice wants to send a message to bob but they can communicate only through a one - way classical noisy channel .how much information can she send to him on average , such that bob learns alice s message with zero probability of error ?this is known as the zero - error channel - coding problem . since its introduction by shannon , several generalizations to multi - party settingshave been proposed and a large research area in the intersection of information theory and combinatorics has been developed ( see for a survey ) . recently , cubitt et al . introduced the entanglement - assisted version of the two - party problem : how much data can alice send to bob with zero error when they are connected through a one - way classical noisy channel and share an entangled quantum state ? does entanglement allow to transmit more information ?an affirmative answer for the latter question was given in and where are shown channels exhibiting a separation between the entanglement - assisted and classical communication .two parties can communicate strictly more if they share a certain entangled state than what they could do in the classical case .note that this separation holds only for the special case where we want the communication to succeed with zero error . as shown in ( * ? ? ?* theorem 1 ) , sharing entanglement does not provide any advantage in the more general case of vanishing error probability , where we ask the probability of error to asymptotically go to zero as the number of uses of the channel goes to infinity .inspired by these results , we continue this line of research by investigating the effect of quantum entanglement in two multi - party zero - error communication scenarios . in the first situation we study, one sender is connected through identical classical channels to multiple receivers .this is known as a _ compound channel_. we prove that , for any fixed number of uses of the channel , entanglement might improve the communication only up to a certain number of receivers ( theorem [ thm : qmonogamy ] ) .the effect is due to the monogamy of entanglement ( or more generally , of non - signaling correlations ) . indeed ,if the number of receivers is greater than some threshold ( which depends uniquely on the channel and the number of channel uses ) , we show that entanglement does not help .however , for any constant number of receivers , we can build a compound channel ( based on a family studied in ) for which there is a separation between the entanglement - assisted and classical setting ( corollary [ cor : compound ] ) . in the second problem ,multiple senders cooperate to communicate with a single receiver through identical classical channels .our theorem [ thm : sep_c_l ] shows that there exist channels for which entanglement increases the amount of communication that can be sent , independently from the number of senders .more surprisingly , there are channels for which entanglement allows a joint strategy among the senders that is strictly better than the sum of the individual strategies ( theorem [ thm : disjointalpha ] ) .if each sender is able to transmit messages using entanglement , there is a joint entanglement - assisted strategy for senders that allows to communicate strictly more than messages .this is in contrast with the classical case where this phenomenon cannon happen .the rest of the paper is organized as follows . in section [ sec : prel ] we introduce the basic notation and the two - party problem . 
in section[ sec : rec ] we study the entanglement - assisted one - sender and multi - receiver situation . in section [ sec : send ] we present the effect of entanglement when there are multiple cooperating senders and one receiver .section [ sec : con ] contains the conclusions and some open questions .we denote with ] , with the kronecker delta function ( if and if ) and with the identity matrix .consider a positive semidefinite operator that acts on a bipartite finite - dimensional hilbert space .we denote with the partial trace of over the subspace .moreover suppose acts on a finite - dimensional -partite hilbert space which we denote as .with we denote the partial trace of over all the subspaces but the -th one , _i.e. _ , . throughout the paper ,the logarithms are binary and the graphs are assumed to be undirected and simple . for any graph , we denote with and its vertex and edge set .if two vertices are equal or adjacent we write , analogously we denote if they are distinct and adjacent .the complement graph of , denoted by , is the graph on the same vertex set as where two distinct vertices are adjacent if and only if they are non - adjacent in .an _ independent set _ of is a subset of that contains only pairwise non - adjacent vertices .the _ independence number _ is the maximum cardinality of an independent set of .a coloring is a partition of the vertex set into independent sets .the _ chromatic number _ is the minimum cardinality of a coloring .clique _ is a set of pairwise adjacent vertices .the _ edge - clique cover number _ is the smallest number of cliques that together cover all the edges of the graph .we denote with the edge - clique cover number of plus the number of isolated vertices of .an _ orthogonal representation _ of a graph is a map from the vertex set into the -dimensional unit sphere such that adjacent vertices are mapped to orthogonal vectors .the minimum dimension for which such a representation exists is denoted as the _ orthogonal rank _ .the graph is the _ complete graph _ on vertices , where every pair of distinct vertices is adjacent .the _ orthogonality graph _ has as vertex set all the vectors in and two vertices are adjacent if the corresponding vectors are orthogonal .the _ strong product graph _ of and is denoted by .it has vertex set ( where denotes the cartesian product ) and a pair of distinct vertices is adjacent if in and in .-th strong graph power _ of , denoted by , is the strong product graph of copies of .its vertex set is the cartesian product of copies of and the pair of distinct vertices forms an edge in if in for all ] and the pair is adjacent if in and ] and the pair is adjacent if in , ] .the lovsz theta number of a graph is equal to where means that is a positive semidefinite matrix . as is the optimal value of a positive semidefinite program ,it can be computed up to any approximation in polynomial time in the number of vertices . for any graph have . among the many useful properties of lovsz theta number, we will use that and for every pair of graphs and .moreover is monotone non - decreasing under taking subgraphs and , for all ( see for a survey on the properties of ) .imagine the following scenario : alice wants to communicate a message to bob but they can only use a one - way classical noisy channel .the channel is fully characterized by its finite input set , its finite output set and a probability distribution for every , where is the probability of bob receiving outcome given that alice has sent input . 
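to make the strong product and the independence number defined above concrete, the following sketch builds the strong product of two small graphs with networkx and computes independence numbers by brute force (as maximum cliques of the complement). the 5-cycle example is our own illustration, not one used in this paper, and the clique-based routine is exponential, so it is only meant for very small graphs.

```python
import networkx as nx

def independence_number(g):
    # alpha(G) equals the clique number of the complement graph (exponential time)
    return max(len(c) for c in nx.find_cliques(nx.complement(g)))

c5 = nx.cycle_graph(5)                     # the 5-cycle
c5_2 = nx.strong_product(c5, c5)           # strong graph product of C5 with itself

print(independence_number(c5))             # 2
print(independence_number(c5_2))           # 5 > 2*2: shannon's classical example
```

the last two lines verify, on this toy graph, that the independence number of a strong product can exceed the product of the individual independence numbers.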
to communicate a message ] and sends through the channel .bob receives output with probability and uses a decoding function ] to bob , alice performs a measurement ( which depends on ) on her part of the entangled state , and sends its outcome through the channel . with probability , bob receives and uses this information to perform a measurement } ] such that , u \simeq v \in v(g),\ ] ] where is the confusability graph of .bob has to be able to perfectly distinguish between distinct messages and , whenever alice channel s input might be confusable .this entanglement - assisted variant has been introduced by cubitt et al . . setting they obtained the following concise definition of the single channel use and of the entangled shannon capacity .[ def : qindep ] for a graph , the _ entanglement - assisted independence number _ is the maximum such that there exist positive semidefinite operators ,\ , u\in v(g)\} ] to every bob , with whom she shares an entangled state .she performs a measurement on her part of the entangled state obtaining as outcome .this outcome is used as input for all the channels and the -th bob receives from .each bob performs a measurement ( which depends on ) on his part of the entangled state , getting a message as outcome .the protocol works if is equal to for every , _ i.e. _ , every bob is able to perfectly learn alice s original message . , width=302 ] the protocol for the entanglement - assisted compound channel is described in figure [ fig : broadcast ] . as in the single - receiver case, the protocol depends only on the confusability graph of the channel and we can define the following quantities .[ def : qcomp ] for a graph , the _ entanglement - assisted compound independence number _ with receivers is defined as the maximum such that there exist positive semidefinite operators ,\ , u\in v(g)\} ] only depends on the corresponding inputs since any entanglement - assisted strategy is also non - signaling , the parties can always communicate at least as much information using a non - signaling strategy as they can using entanglement .since we are studying the problem of sending information over a channel with zero - error , we have restricted our attention to the properties of the confusability graph of the channel . however, many channels can have the same confusability graph and , unlike the classical and entanglement - assisted capacities , the non - signaling capacity depends on the particular channel . for our purposes we are interested in the particular channel that minimizes the number of outputs while keeping the same confusability graph .notice that every output of a channel defines a clique or an isolated vertex in the confusability graph .therefore , we fix a edge - clique covering of the confusability graph of minimum cardinality ( which might not be unique ) , we add the isolated vertices to obtain a clique covering of cardinality , and we consider the channel that has outputs . in other words ,we take a channel which has one output per element of the edge - clique covering plus one output per isolated vertex . for the two - party zero - error channel - coding problem ,the non - signaling version was studied in ( see also ) . 
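the lovász theta number introduced above is the optimal value of a semidefinite program and can be evaluated numerically. below is a minimal sketch using cvxpy (our own choice of solver interface, not software referenced in this paper) with the standard formulation ϑ(G) = max{⟨J,X⟩ : tr(X)=1, X_{uv}=0 for uv ∈ E(G), X ⪰ 0}.

```python
import cvxpy as cp
import networkx as nx

def lovasz_theta(g):
    nodes = list(g.nodes())
    n = len(nodes)
    idx = {v: i for i, v in enumerate(nodes)}
    X = cp.Variable((n, n), symmetric=True)
    constraints = [X >> 0, cp.trace(X) == 1]
    constraints += [X[idx[u], idx[v]] == 0 for u, v in g.edges()]
    prob = cp.Problem(cp.Maximize(cp.sum(X)), constraints)
    prob.solve(solver=cp.SCS)
    return prob.value

print(lovasz_theta(nx.cycle_graph(5)))   # ~2.236, i.e. sqrt(5) for the 5-cycle
```

for the 5-cycle the program returns approximately √5, the classical value, which sits between the independence number 2 and the fractional clique-cover bound.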
thereit is shown that the non - signaling zero - error channel capacity has an elegant closed formula that can be computed from the description of the channel .in the two - party case , a protocol for a single use of the channel ( with confusability graph ) is the following .alice wants to transmit the message ] .the transmission is successful if we always have that .we can now directly extend this protocol to the compound channel with receivers , where alice and the bobs share a -partite non - signaling probability distribution . to communicate message ] and are elements of a fixed clique covering of cardinality .additionally , we require for all ] and there exists a ] ) .as already mentioned , since every quantum strategy is also non - signaling , for every graph and , it holds that .[ thm : monogamy ] for all graphs , if then in order to prove the theorem , we use the monogamy of non - signaling distributions as derived in . for convenience , we reproduce the definition and result here . [ def : shareable ] a non - signaling probability distribution is called _-shareable with respect to bob _, if there exists an -partite non - signaling probability distribution + such that : 1 . for all permutations , we have that 2 .it holds that note that if both conditions hold , we have that for all ] ,whenever for all ], bob uses his output to identify a clique of that contains . he can then define a projective measurement on the orthogonal representations of the elements of such clique and recover with probability 1 by measuring .the protocol works if is such that , implying that step [ step3 ] can be performed .if this is the case , alice and bob can transmit a message set of size using the channel times .the protocol can be easily extended to the multi - receiver case .suppose that there are bobs and that alice shares an independent maximally entangled state with each of them . with uses of the channel , alicecan perfectly communicate one out of messages with zero - probability of error to all the bobs .in fact , with uses of the channel , alice sends any -tuple of vertices .she then teleports their orthogonal representations to each bob individually and uses the channel additional times to send the classical bits that complete teleportation .the -th bob will consider the -th use of the channel as his output and ignore the others .this protocol gives the lower bound : where we chose such that ( or equivalently ) . in this subsection, we present a graph family for which entanglement improves the zero - error capacity with finitely many bobs .these graphs were introduced in and , in , they were used to exhibit an infinite family of graphs for which exists separation between classical and entanglement - assisted capacity .let be an odd positive integer .quarter orthogonality graph _ has as vertex set the set of vectors such that and has an even number of -1 entries .the edge set consists of the pairs of orthogonal vectors .is called quarter orthogonality graph because it is an induced subgraph of the orthogonality graph ( as defined in section [ sec : not ] ) with one quarter of its vertices . ] for this graph , it is shown in ( see for a similar result ) that holds for where is an odd prime and .thus , for all and as above , .the map such that is an orthogonal representation of and , hence , . if is such that , we have and we can use the protocol from the previous section to communicate messages . 
a simple lower bound on the independence number of the graph can be derived considering the set of vertices that have ones in their first coordinates .it is easy to see that is an independent set , and that its cardinality is by recalling that the vertices of have an even number of entries , we show that the teleportation lower bound previously described gives a separation between and for certain and . for every odd integer and , with .note that if , we have from the discussion above that . therefore we can apply equation and obtain the desired bound , since .[ cor : compound ] consider any such that is an odd prime and .if then .easy algebraic manipulations give that for every where , it follows that our lower bound , for is strictly larger than the classical capacity up to , for up to , and the upper bound on tends to infinity as goes to infinity .in this section , we focus on a different zero - error communication scenario which can be thought as multiple senders with a single receiver .suppose there are alices , each of whom gets access to a classical channel which connects her to the single bob .we are interested in the total amount of messages that the senders , as a group , can transmit perfectly . at every stage of the communicationonly one of the alices uses her channel to communicate an input .we assume that inputs of one sender can not be confused with inputs from another sender .in other words , the receiver knows which one of the senders sent him the message .we want to study the maximum cardinality of a message set that the senders are able to perfectly communicate to the receiver when they are allowed to cooperate .equivalently , this communication scenario can be depicted as single - sender single - receiver where the sender can choose among channels to use for the communication . at every round of communication, the receiver learns both the output of the channel as well as which channel has been used .suppose that the -th alice is connected to bob through a channel with confusability graph .as noticed in , the confusability graph related to cooperating alices is given by the disjoint union .( intuitively , since inputs from different senders can not be confused there are not edges between vertices of and if . )the maximum size of a message set that can be perfectly transmitted with one use of the channel is . what happens if we allow multiple channel uses ?shannon showed that for every pair of graphs and , if and then and he conjectured that equality holds .however , alon showed that there exists a pair of graphs for which strict inequality holds . from an information - theoretical perspective , this example says that if the two senders are allowed to cooperate , the average number of messages they can communicate is strictly more than the sum of their individual possibilities .this result was extended by alon and lubetzky for a larger number of senders .they showed that it is possible to assign a channel to each sender such that only privileged subsets of senders are allowed to communicate with high capacity .-th alice has access to a classical channel . 
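for a single use of the channels, the additivity of the independence number over the disjoint union discussed above can be checked directly on small examples; the graphs below are our own toy choices and the routine is exponential-time.

```python
import networkx as nx

def independence_number(g):
    # exponential-time: maximum clique of the complement graph
    return max(len(c) for c in nx.find_cliques(nx.complement(g)))

g1, g2 = nx.cycle_graph(5), nx.petersen_graph()
both = nx.disjoint_union(g1, g2)   # confusability graph of two cooperating senders

print(independence_number(g1) + independence_number(g2))   # 2 + 4 = 6
print(independence_number(both))                           # 6: alpha adds up for one use
```

the interesting effects described in this section therefore only appear once several channel uses (classically) or shared entanglement (quantumly) are allowed.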
if the alices want to communicate the message to bob , one of them performs a measurement ( that depends on ) on her part of the entangled state .the outcome indicates that the -th alice should use her channel to send input .bob receives an outcome and , by assumption , he knows that channel has been used for the communication .he can then perform measurement , which depends on and , and outputs .the protocol works if is equal to with zero probability of error ., width=12 ] we study this problem in the entanglement - assisted setting focusing on the particular case where all alices have access to the same channel with confusability graph .since the senders are cooperating and there is no restriction on the amount of shared entanglement in this model of communication , we can assume that only one of the senders actually performs quantum operations on the entangled state .hence , we can without loss of generality assume the quantum state to be bipartite and the entanglement - assisted strategy to be used to send messages through a classical channel with confusability graph . recall that denotes the disjoint union of copies of the graph .the protocol is depicted in figure [ fig : multi - sender ] .[ def : qmultiuser ] for a graph define the _ entanglement - assisted multi - sender independence number _ with senders as .the _ entangled multi - sender shannon capacity _ with senders is which by definition is equal to .a useful observation is that for every and .indeed , each alice can individually communicate messages using entanglement and in our model bob learns automatically which alice performed the communication .therefore , the cooperating alices can communicate at least one among messages with one use of the channels and entanglement .somewhat surprisingly , we present in section [ sec : viol ] an example of a graph for which senders have a better joined strategy . in other words , there is a graph and for which .this does not happen in the classical case where , using analogous notation , we have and for every and . the latter equality is also mentioned in and it follows from the fact that the strong graph product distributes over the disjoint union , _i.e. _ , for every ( see for example for a proof ) , which in particular implies that . using this last equality , we have in this section , we show the following : for every graph with a separation between the classical and entanglement - assisted capacity , there is also a separation in the multi - sender setting independently of the number of senders ( theorem [ thm : sep_c_l ] ) .the same type of result holds when we restrict to single use of the channel ( lemma [ lem : sep_alpha_l ] ) .the latter is to be expected since we mentioned above that there is an easy quantum strategy that allows the alices to communicate messages with one use of the channels .[ lem : sep_alpha_l ] for any graph such that , we have for every . 
each alice can individually communicate messages using entanglement .since bob also learns which alice has sent him the message , we have an easy strategy that allows to send one among messages with entanglement .hence , we have [ thm : sep_c_l ] for any graph such that , we have for every .recall that and that for every and graph .therefore , we get in this section we show the existence of a graph and natural number for which .this means that there is an entanglement - assisted strategy for cooperating senders which is strictly better than the sum of their best individual strategies .more generally , we are able to prove that there exist graphs for which cooperation among the senders allows a better entanglement - assisted strategy for any finite number of channels uses .this is a peculiar property of the entanglement - assisted setting since in the classical case and always hold .we currently do not know whether this improvement gained by cooperation in the entanglement - assisted setting extends also to the asymptotic regime . in order to prove the result, we need to briefly describe a different two - party entanglement - assisted communication scenario .we defer to for a more detailed explanation .let be a graph .suppose that alice receives a vertex and bob receives ( as side information ) a clique under the promise that .alice can send classical messages to bob without error .what is the minimum cardinality of a message set that alice has to use to communicate to bob such that he can perfectly learn alice s input ?it is straightforward to check that in the classical scenario , the minimum cardinality is given by the chromatic number . given an optimal coloring of the graph , alice can simply send the color corresponding to to bob . by definition ,the color is sufficient to learn , because elements of a clique all have different colors .conversely , any deterministic strategy yields a coloring of the graph .we can assume an optimal strategy to be deterministic as we are considering the zero - error scenario .similarly , the _ entanglement - assisted chromatic number _ is the minimum cardinality of a message set that alice has to send to bob such that he can perfectly learn when alice and bob can share an arbitrary entangled state .the parameter was introduced in to quantify the amount of communication needed in the above mentioned scenario .there it is shown that and for every graph and .we will need the following technical lemma .recall that is the cartesian product graph between and the complete graph .[ lem : qchromqindep ] for any graph , if then first , we prove the inequality for any .let . for every , we have .this chain of inequalities uses the fact that upper bounds , is a subgraph of and is monotone non - decreasing under taking subgraphs , is multiplicative under strong graph products and that and .let and suppose that alice and bob are connected through a one - way classical channel with confusability graph .we present a strategy which allows alice to communicate messages through this channel with the help of entanglement , thus implying that .suppose that alice wants to send message to bob .using the strategy for the entanglement - assisted chromatic number , alice makes a measurement on her part of the entangled state and gets an outcome ] .suppose that from his channel output bob infers that alice s input is an element of the set where is a clique of containing .since bob learns , he can use message to finish the protocol of the entanglement - assisted chromatic number . 
as mentioned above the protocolallows bob to recover with zero probability of error . for the other case ,bob from his output learns that alice s input is an element in the set $ ] with .then he can directly deduce that is the original message used by alice .hence , we have shown an entanglement - assisted protocol that allows to perfectly communicate classical messages through a channel with confusability graph .we conclude that if then . combining this last inequality withthe one derived at the beginning of the proof , we can conclude .as a side remark , there is an analogous classical counterpart of the statement in lemma [ lem : qchromqindep ] : if then .its proof can be derived using a reasoning similar to the proof above or by a simple graph - theoretic argument .the following lemma is proven in a more general context in .[ lem : sepdisjoint ] suppose is a graph such that and let . then , . from lemma [ lem : qchromqindep ] ,we get where the last inequality follows from the fact that is a subgraph of and that is monotone non - decreasing under taking subgraphs .recall from section [ sec : not ] that the orthogonality graph has all the vectors as vertex set and two vectors are adjacent if orthogonal . from , we know that and if is a multiple of four .consider the orthogonal representation with that maps vertices of to the unit sphere and adjacent vertices to orthogonal vectors .since is upper bounded by the minimum dimension of an orthogonal representation in which all the entries of the vectors have equal moduli , we have that , and thus , for every multiple of four .we show the existence of a graph and for which .hence , cooperating senders can communicate strictly more ( with one use of a channel and entanglement ) than the sum of what they can communicate individually .[ thm : disjointalpha ]let be the orthogonality graph with a multiple of four but not a power of two .then . from the reasoningabove we know that .moreover , . using a similar argument as in , we get that since using lemma [ lem : sepdisjoint ] we conclude that . with a similar reasoning, we can prove that for every finite number of uses of the channel , cooperation among the players improves the entanglement - assisted communication .let be the maximum cardinality of a message set that alices can communicate perfectly to bob with uses of the channel and entanglement .in the next lemma , we show that there exist a graph and such that for every .this is equivalent to saying that there exists a channel and a certain number of senders for which cooperation among the senders strictly improves the communication of channel uses for every .[ thm : disjointalpha_n ] let be the orthogonality graph with a multiple of four but not a power of two .then , for every . using the properties of lovsz theta number presented in section [ sec : not ] , we have that and for every . then by sub - multiplicativity of and since , we have .this implies that for any integer , applying lemma [ lem : sepdisjoint ] , we then have that for every .we can conclude that have studied the effects of entanglement in two multi - party channel - coding problems . for the compound channel setting ( with one sender and multiple receivers )we have shown that entanglement can only help if the number of receivers is below a certain threshold which depends only on the channel . if there are more receivers , entanglement does not help for zero - error communication for any finite number of channel uses . 
in the second situation , where there are multiple senders and one receiver ,we have shown that there are channels for which entanglement always improves the communication . in both these situations , we assume that the multiple parties have access to identical channels . the first natural question to ask is about the effect of entanglement when the channels are different .is it possible to extend to the entanglement - assisted setting the results obtained by ( for multiple receivers ) and ( for multiple cooperating senders ) ?these problems seem difficult as we only have a limited understanding of the behavior of the parameters and .two more specific questions are related to theorem [ thm : qmonogamy ] in section [ sec : rec - neg ] .can we find a better bound on the number of receivers for which the entanglement - assisted capacity is equal to the classical capacity in the compound channel scenario ?the bound we have obtained is the edge - clique cover number , but this bound is derived using non - signaling distributions and therefore in a more general context than the entanglement - assisted setting .furthermore , it would be interesting to know whether an asymptotic version of theorem [ thm : qmonogamy ] holds .does the parameter tend to the classical capacity when the number of receivers goes to infinity ? in general, finding better bounds for the parameter is interesting .can we find a new general protocol ( like the one based on teleportation in section [ sec : tele ] ) that gives a better ( or incomparable ) lower bound on ? in a similar spirit , can we find an upper bound on which is different from the lovsz theta number ?note that such an upper bound is known for the classical parameter and it was found by haemers .a related question is to find a graph such that .an approach to this latter problem is to find a pair of graphs , and ( not necessarily distinct ) , for which where and .such a result would be in the same spirit as our findings on the parameter described in section [ sec : viol ] .the authors thank jop brit , harry buhrman , monique laurent , laura maninska , fernando de melo , andreas winter and ronald de wolf for useful discussions .we also thank the anonymous referees for their excellent comments and suggestions to improve this article , in particular for the operational proof of lemma [ lem : qchromqindep ] .was partially funded by the european project siqs .g.s . was supported by the european commission ( strep raquel ) .part of this work was done when g.s . was a phd student at cwi amsterdam , supported by ronald de wolf s vidi grant from nwo .c.s . was supported by an nwo veni grant .
|
we study the effects of quantum entanglement on the performance of two classical zero - error communication tasks among multiple parties . both tasks are generalizations of the two - party zero - error channel - coding problem , where a sender and a receiver want to perfectly communicate messages through a one - way classical noisy channel . if the two parties are allowed to share entanglement , there are several positive results that show the existence of channels for which they can communicate strictly more than what they could do with classical resources . in the first task , one sender wants to communicate a common message to multiple receivers . we show that if the number of receivers is greater than a certain threshold then entanglement does not allow for an improvement in the communication for any finite number of uses of the channel . on the other hand , when the number of receivers is fixed , we exhibit a class of channels for which entanglement gives an advantage . the second problem we consider features multiple collaborating senders and one receiver . classically , cooperation among the senders might allow them to communicate on average more messages than the sum of their individual possibilities . we show that whenever a channel allows single - sender entanglement - assisted advantage , then the gain extends also to the multi - sender case . furthermore , we show that entanglement allows for a peculiar amplification of information which can not happen classically , for a fixed number of uses of a channel with multiple senders .
|
the understanding and appropriate modeling of the transport of charged energetic particles in turbulent magnetic fields remains one of the long - standing challenges in astrophysics and space physics .the many simultaneous in situ observations of both the heliospheric magnetic field and different energetic particle populations make the heliosphere a natural laboratory for corresponding studies .of particular interest in this context are galactic cosmic rays ( gcrs ) and jovian cosmic ray electrons . while the former traverse the whole three - dimensionally structured heliosphere and allow for studies of its large - scale variations as well as of the evolution of heliospheric turbulence ( e.g. , * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) , the latter represent a point - like source and are , thus , well suited for analyses of anisotropic spatial diffusion ( e.g. , * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?+ in order to exploit this natural laboratory in full , it is necessary reproduce the measurements with simulations that do contain as much as possible of the three - dimensional structure of the plasma background within which the cosmic rays are propagating .significant progress has been made regarding the modeling of the quiet solar wind ( e.g. , * ? ? ?* ) but much remains to be done to implement the many features that are structuring the solar wind and the heliospheric magnetic field , into transport models of cosmic rays .particularly interesting structures are the corotating interaction regions ( cirs ) that are formed during the interaction of fast solar wind streams from coronal holes with preceding slow solar wind and usually persist for several solar roations ( e.g. , * ? ? ?* and references therein ) .these structures not only lead to particle acceleration , but also to the modulation of gcr and jovian electron spectra . indeed ulysses measurements , as described in , e.g. , , at high heliolatitudes during the first orbit of the spacecraft around the sun indicated that cirs represent the major constituent for the three - dimensional heliospheric structure close to solar minimum ( e.g. * ? ? ?a major surprise came from the measurements of accelerated energetic particles and gcrs that showed clear periodic signals even above the poles of the sun where no in situ signals of cirs were registered by the plasma or magnetic field instrument .electron measurements , however , indicate no variation at these region . it can be speculated that the differences are caused by the fact that both gcrs and locally accelerated particles have an extended source , while mev electrons in the inner heliosphere originate from a point - like source as mentioned above .thus , there is a different influence of cirs on gcr and jovian electron flux variations as investigated recently by .+ if the electron source is a well - localized point - like source in the heliosphere namely the jovian magnetosphere it is mandatory to describe the structure of the plasma stream in the whole inner heliosphere up to several au and not only at the location of different spacecraft measuring these particles. 
[fig-1 caption fragment] ... separated by 30 degrees. squares and triangles mark the positions of the spacecraft for doy 213 and 267, respectively. right: projection of the trajectories onto the ecliptic plane for doy 267. [adapted from]. [fig-2 caption fragment] the ulysses electron intensities have been multiplied by a factor of two. a unique constellation of spacecraft to investigate these intensity variations was present in august 2007, when ulysses crossed the heliographic equator during its third so-called fast latitude scan. figure [ fig-1 ] (left) displays the trajectories of ulysses (blue), stereo-b (red), soho (gold) and stereo-a (green) in ecliptic coordinates. the right panel of that figure shows the spacecraft positions projected onto the ecliptic plane on september 24. the dotted and colored lines display parker spirals using a velocity of 400 km/s. + figure [ fig-2 ] displays the corresponding solar wind and mev electron measurements by the swoops and swepam instruments as well as the mev electron fluxes from the cospin/ket and the costep/ephin detectors aboard ulysses, ace, and soho, respectively, for the period of interest. while a recurrent structure of the electron intensities is present for the whole period shown in figure [ fig-2 ], the ulysses measurements only show such variations when the spacecraft is embedded in a cir region. a first analysis of the ephin measurements has been reported by , who showed that over half a year of measurements the electron flux can be correlated or anti-correlated with the solar wind speed, depending on the spacecraft position relative to jupiter. investigated the spatial and temporal variation of these cir structures and concluded that the measurements are highly dependent on the spacecraft latitude and on the evolution of the coronal hole structure. thus, in order to interpret simultaneous measurements from different locations in the heliosphere, a detailed knowledge of the plasma background is needed. the background plasma and magnetic fields in which the transport of energetic particles is modeled can, to a first approximation, be prescribed by the simple and well-known parker model. also, attempts have been made to describe cirs analytically, assuming them to be stationary in a frame corotating with the sun, e.g. by . such a treatment was recently pursued by to investigate recurrent 27-day decreases in jovian electron counts ( see also * ? ? ? + a far more realistic modeling of the heliospheric environment can be obtained with mhd simulations: while, physically, the heliospheric magnetic field (hmf) originates in the sun, it is conceptually customary to distinguish for modeling purposes between (i) the coronal magnetic field filling the region from the solar surface out to a spherical so-called heliobase at several (tens of) solar radii and (ii) the hmf beyond. there are two popular modeling approaches for the coronal magnetic field, namely potential field reconstructions and mhd models ( see * ? ? ? the latter approach is computationally more challenging but can account for more physics, direct time-dependence and self-consistency. there are numerous examples for such mhd modeling of the coronal magnetic field, including the work of, e.g.
, .+ the second popular approach for deriving solar wind conditions utilizes potential field reconstructions of the coronal field that account for the solar wind s influence by introducing a so - called source surface beyond which the field is required to be purely radial .the basic technique of potential field source surface ( pfss ) models , originally introduced by and , is still widely used and was found to provide a means for predicting solar wind speed at earth via the so - called fluxtube expansion factor of open coronal field lines .another quantity that can be derived from potential field models , the footpoint distance of an open field line to the nearest coronal hole boundary , was used by to empirically quantify the resulting solar wind speeds .combining such approaches and incorporating the schatten current sheet ( scs ) model to account for thin current sheets resulted in the so - called wang - sheeley - arge ( wsa ) model .a variety of versions of the wsa model predict solar wind speed distributions at different solar distances rather close to the sun from where the predictions must be propagated further outwards .earlier models used simple kinematic propagation schemes , while nowadays mhd codes are utilized since they can account for more physics needed , e.g. , for the proper modeling of stream interactions and shock formation .such combined models are in operation at space weather forecasting facilities such as the community coordinated modeling center ( ccmc ) or the space weather prediction center ( swpc ) . + the main advantage of the latter empirical models are their significantly reduced computational costs as compared to coronal mhd models .additionally , the empirical models avoid the problems arising due to sub - alfvnic solar wind speeds complicating boundary conditions as perturbations may travel back towards the photosphere and the issue of coronal heating .furthermore , it was demonstrated by that pfss solutions often closely match those obtained by numerical mhd models .+ in the present study , we are mainly interested in the influence of cirs on the transport of energetic particles from distant sources towards the earth ( galactic cosmic rays , jovian electrons ) , such that a detailed coronal model is not mandatory . 
in this light we postpone the implementation of a coronal mhd model to future studies and instead use the empirical wsa model as input to mhd simulations in a domain from 0.1 au to 10 au and possibly further out. this will provide a realistic heliospheric environment for the sde transport code to study the propagation of energetic particles, which will be addressed in the second paper of this series. the paper at hand describes the mhd modeling and is organized as follows: + section [ sec : code ] briefly describes the cronos mhd code in the specific application to heliospheric modeling. this setup is validated in section [ sec : pizzo ], where we compare results for analytically prescribed boundary conditions for cirs with those originally obtained by for the same setup. section [ sec : wsa ] gives an overview of the wsa model comprising potential field modeling and a set of empirical formulae to derive the inner boundary conditions for the mhd simulations. the acquisition of data from in situ measurements from the stereo and ulysses spacecraft and the comparison with our results is addressed in section [ sec : results ] before giving our conclusions and an outlook on future tasks. the tool of choice for our simulations is the state-of-the-art mhd code cronos, which has been used in recent years to model astrophysical (e.g. ism turbulence, accretion disks) and heliospheric scenarios. amongst its main features the code employs a semi-discrete finite-volume scheme with runge-kutta time integration and adaptive time-stepping, allowing for different approximate riemann solvers. the solenoidality of the magnetic field is ensured via constrained transport, provided the magnetic field is initialized divergence-free. supported geometries are cartesian, cylindrical, and spherical (including coordinate singularities) with the option for non-equidistant grids. here, we use spherical coordinates $(r,\vartheta,\varphi)$ with the origin located at the center of the sun. thus, $r$ is the heliocentric radial distance, $\vartheta\in[0,\pi]$ is the polar angle, and $\varphi\in[0,2\pi]$ is the azimuthal angle. $\varphi$ corresponds to carrington longitude in this paper, except for the test case in section [ sec : pizzo ], where a reference longitude is arbitrary. the code runs in parallel (mpi) and supports the hdf5 output data format. + in its basic setup, the code solves the full, time-dependent, normalized equations of ideal mhd
\[\begin{aligned}
\partial_t \rho + \nabla\cdot(\rho\,{\bf v}) & = & 0 \\
\partial_t (\rho\,{\bf v}) + \nabla\cdot\left[ \rho\,{\bf v}{\bf v} + \left( p + |{\bf b}|^2/2 \right) {\bf 1} - {\bf b}{\bf b} \right] & = & { \bf f } \label{eq : mom}\\
\partial_t e + \nabla \cdot \left[ ( e+p+ |{\bf b}|^2/2 )\, { \bf v } - ( { \bf v } \cdot { \bf b } ) { \bf b } \right] & = & { \bf v}\cdot{\bf f } \\
\partial_t { \bf b } + \nabla \times { \bf e } & = & { \bf 0 }
\end{aligned}\]
with the ideal ohm's law ${\bf e} = -{\bf v}\times{\bf b}$, where $\rho$ is the mass density, ${\bf v}$ is the fluid velocity, ${\bf e}$ and ${\bf b}$ describe the electromagnetic field, $e$ is the total energy density, and $p$ is the scalar thermal pressure. + additional force densities ${\bf f}$ can be introduced by the user.
in our setupthese are the gravitational force density and , since it is convenient to perform calculations in a frame of reference corotating with the sun , the fictitious forces , where .since we know of no consistent way to connect the sun s observed differential surface rotation from the photosphere to the lower radial boundary of our computational domain , we are forced to neglect this effect and , therefore , choose a constant solar angular rotation speed .furthermore , denotes the unit tensor , and the dyadic product is used in ( [ eq : mom ] ) .+ amongst the closure relations an adiabatic equation of state is used with an adiabatic exponent , which is the same value as used in the heliospheric part of the enlil setup .as mentioned in the introduction , the modeled heliospheric background will be used as an input for a sde transport code in order to investigate the propagation of energetic particles in a forthcoming study . during solar quiet times , cirs are the most prominent agents that can have significant influence on the transport coefficients as they act as diffusion barriers due to the associated magnetic field enhancements .we , therefore , first demonstrate the capability of our setup to model these structures .further detailed model validation for the application of the cronos code to inner - heliospheric scenarios were performed by .+ there are no exact analytic expressions for the plasma quantities in cir associated structures , although there exist simplified models ( e.g. , see also ) , but the expressions given therein perform a heuristical fit to data and lack a description of the cir associated shocks .therefore , to validate our model we instead compare with results obtained with previous numerical simulations , namely the pioneering work by who investigated a variety of different steady - state scenarios , one of which will be summarized and compared with here .the inner radial grid boundary is chosen to be located at au , well beyond the critical surfaces , so that the solar wind speed is super - magnetoacoustic everywhere .therefore , perturbations can not travel towards the sun and constant ( corotating ) boundary conditions can be chosen .a circular regional patch of fast , tenuous and hot wind centered at the equator is embedded into the ambient slow , dense and cool wind , providing the basic ingredients for a cir ( see figure [ fig : pizzo_overview ] to get a first expression ) .the inner radial boundary conditions are shown in an equatorial slice in figure [ fig : pizzo_slice ] ( a ) and are ascribed as follows .the formula used to specify radial velocity at the inner boundary in our setup reads with km / s and ] .for the results presented below , the simulation results have been copied and appended to cover the whole interval .the polar regions are not significant here and are excluded to avoid small time - steps , thus ] grid points , so that the angular resolution matches that of the magnetograms .the resulting grid s radial resolution agrees with the stepsize in tracing magnetic field lines in the wsa model ( c. n. arge , personal communication ) .the outer half - sphere represents the source surface .open magnetic field lines reach the source surface and are mainly confined to polar regions ; however , excursions to lower latitudes are visible as well . 
]the resulting potential field configuration for cr2060 is shown in figure [ fig : pfss ] : the reconstructed photospheric magnetogram is shown on the inner sphere ( see top panel of figure [ fig : cs ] for a full map ) , where the data range is restricted to gauss to better illustrate the small scale structures ( active regions in this magnetogram have maximum values up to 500 gauss ) .the outer half - sphere represents the source surface .selected field lines display rather typical solar minimum conditions with open field lines in polar regions ( giving rise to the fast polar wind ) and closed loops towards the ecliptic plane ( resulting in a slow wind there ) .excursions of open field lines to lower latitudes give rise to high - speed streams there that will interact with the ambient slow wind to form cirs .+ it has been demonstrated that pfss solutions often closely match respective mhd results .one shortcoming of the pfss models , however , is that they do not produce a thin current - sheet , which can be seen in figure [ fig : cs ] , middle panel , showing a map of the source surface radial magnetic field where the transition between the different polarities is rather broad .this is in contrast to observations that show sharp and thin current sheets at magnetic field polarity reversals . to overcome this problem, the wsa model further utilizes the schatten current sheet ( scs ) model to compute the magnetic field beyond the source surface : the magnetic field at the source surface is first re - orientated where necessary to point ( radially ) outward everywhere , and is then used as a boundary condition for another potential field approach from which respective spherical harmonic coefficients are calculated .in contrast to the reconstruction of the highly structured magnetograms , the spherical harmonics approach does not suffer from ringing artifacts in this case , since a small maximal order of 9 is sufficient and applied here .the initial orientation has to be restored in the resulting configuration , which can be achieved by tracing field lines from the outer boundary of the schatten model at au back to the source surface .the resulting map of the radial magnetic field ( figure [ fig : cs ] , bottom panel ) at is topologically similar to the one at the source surface with smaller maximal tilt angles of the current sheet .the map is used in defining the inner boundary condition in the mhd calculations as described in the next section . to derive boundary conditions for the remaining plasma quantities at au ,a set of empirial formulas is employed , which are largely based on the topology of the coronal potential field configuration . found an inverse relationship between the flux - tube expansion factor and the resulting solar wind speed : large expansion factors ( low solar wind speeds ) are usually associated with field lines that have their photospheric footpoints at coronal hole boundaries ( i.e. 
the boundary between open and closed field lines ) , while small expansion factors ( high solar wind speeds ) are associated with those originating from deep within a coronal hole .therefore , the further away an open field line footpoint is located from a coronal hole boundary the higher is the resulting solar wind speed .this can be expressed in terms of the quantity , the footpoint distance to the nearest coronal hole boundary , introduced first by .it was since then found that using both and in conjunction gives better results than either one alone .+ the actual computation of and requires field lines being traced back to their respective footpoints in the photosphere at ( ) .our algorithm implemented for tracing the field lines employs an adaptive step - size method inherent to embedded runge - kutta ( rk ) methods , where in our setup the maximal allowable deviation from unity ratio taken of 5th order rk and embedded 4th order rk is used to determine the step - size . for the tracing in the pfss domain below we used and below , respectively , resulting in stepsizes in the range from 0.1 to 0.01 ( with the lower limit corresponding to the grid s cellsize ) .a comparison using everywhere yielded no difference in resulting footpoint locations within , which is far below the magnetograms resolution .similarly , for the scs models domain beyond 2.5 step - sizes can go up as high as 0.8 for .the resulting photospheric footpoint locations are shown in figure [ fig : footpoints ] ( a ) : the red / green dots indicate footpoints with positive / negative polarity , respectively . besides the large coronal holes in the polar regions there are excursions to equatorial latitudes , which are the sources of respective high - speed streams there .also shown are the highest closed fieldlines in blue , which are traced in both directions from just below the source surface ( ) and characterized as closed if the photosphere is reached in both directions .a qualitative comparison can be made with similar plots available on the gong website , which is shown for cr2060 in panel ( b ) of figure [ fig : footpoints ] .even though there are slight differences in the details , the global topology is very similar to that in panel ( a ) , especially concerning the locations of the equatorial extensions of open fieldline footpoints and regions with fieldlines closing below .differences may arise due to the different potential field approaches ( the plot from the gong webpage uses spherical harmonics ) and the method to calculate the field lines .+ the coronal hole boundary can now be defined to be situated where footpoints of open field lines are adjacent to those of closed field lines .this was achieved by binning the respective photospheric map into a grid and then labeling cells based on whether or not they contain open field lines .the coronal hole boundary is defined where an open cell has at least three of its eight neighboring cells marked as closed. we calculate the distance to the nearest coronal hole boundary ( chb ) for each footpoint of an open field line ( fp ) by taking the distance along a great circle to all cells marked as coronal hole boundary and choosing the smallest value . 
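a minimal sketch of the coronal-hole-boundary labeling and great-circle footpoint distance just described follows below. the grid resolution, the input format (numpy arrays of footpoint longitudes and latitudes in radians plus an open/closed flag) and all helper names are our own assumptions and do not reproduce the actual implementation used with the magnetogram data.

```python
import numpy as np

def great_circle(lon1, lat1, lon2, lat2):
    # central angle between points on the unit sphere (radians)
    return np.arccos(np.clip(np.sin(lat1) * np.sin(lat2) +
                             np.cos(lat1) * np.cos(lat2) * np.cos(lon1 - lon2),
                             -1.0, 1.0))

def chb_distance(fp_lon, fp_lat, fp_open, nlon=180, nlat=90):
    """For every open-field-line footpoint, return the great-circle distance
    (in radians) to the nearest coronal-hole-boundary cell center."""
    fp_lon = np.asarray(fp_lon); fp_lat = np.asarray(fp_lat)
    fp_open = np.asarray(fp_open, dtype=bool)

    # bin footpoints into a lon/lat grid and mark cells containing open field lines
    ilon = np.clip((fp_lon / (2 * np.pi) * nlon).astype(int), 0, nlon - 1)
    ilat = np.clip(((fp_lat + np.pi / 2) / np.pi * nlat).astype(int), 0, nlat - 1)
    open_cell = np.zeros((nlat, nlon), dtype=bool)
    open_cell[ilat[fp_open], ilon[fp_open]] = True

    # boundary: an open cell with at least three of its eight neighbours closed
    boundary = np.zeros_like(open_cell)
    for j in range(nlat):
        for i in range(nlon):
            if not open_cell[j, i]:
                continue
            closed = 0
            for dj in (-1, 0, 1):
                for di in (-1, 0, 1):
                    if dj == di == 0:
                        continue
                    jj = j + dj
                    if 0 <= jj < nlat and not open_cell[jj, (i + di) % nlon]:
                        closed += 1
            boundary[j, i] = closed >= 3

    blat, blon = np.where(boundary)
    b_lon = (blon + 0.5) / nlon * 2 * np.pi
    b_lat = (blat + 0.5) / nlat * np.pi - np.pi / 2
    return np.array([great_circle(lo, la, b_lon, b_lat).min()
                     for lo, la in zip(fp_lon[fp_open], fp_lat[fp_open])])
```

the returned values are central angles; converting them to a physical arc length or to degrees, as required by the empirical speed relation, is a separate, trivial step.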
+ the computation of the flux - tube expansion factor is carried out along with the determination of open footpoints by taking the respective magnetic field values at the source surface and the photosphere as connected by a field line and using formula ( [ eq : fs ] ) . + the set of empirical formulae to determine the boundary and initial conditions for the remaining mhd quantities is similar to the one used by , with the following adaptations : + the formula for radial velocity now reads as discussed by . a number of such formulae can be found in the literature ( e.g. ) , differing in form as well as in scaling parameters . the parameters have values km / s , km / s , and at swpc , while found km / s , km / s , and by fitting solar wind velocity distributions . we performed a rough tuning to be in better agreement with the spacecraft data and found for our setup more suitable values km / s , km / s , and . the simulation grid covers in latitude . this gives a runtime of hours on a 64 core cluster , while the physical convergence time is hours , which is estimated from the time the slowest wind ( / s ) takes to propagate to the outer boundary . taking the simulation to larger radial distances for usage in the sde code is straightforward , but requires the grid to be coarsened to keep the required computer resources reasonable . the polar regions could be included as well , but this would further restrict the global timestep in the simulations due to the small cell sizes towards the poles . the simulation results of the solar wind speed , density , and temperature as well as of the magnetic field are compared to spacecraft data of both the ulysses and the two stereo spacecraft . stereo - a(head ) leads earth while stereo - b(ehind ) trails earth . in detail , the ulysses data are based on measurements from the ulysses solar wind observations over the poles of the sun ( swoops , ) and the magnetometer onboard ( vhm , ) .
for the stereo - a / b twin spacecraft , measurements were taken from the plasma and suprathermal ion composition ( plastic , ) instrument and the spacecraft s magnetometers ( mag , ) .+ we first discuss the results for cr 2060 and only briefly address the results for cr2059 and cr2061 presented afterwards .+ for a comparison with the spacecraft data the mhd results have been interpolated along respective spacecraft trajectories .panel ( a ) in figure [ fig : comp ] shows the resulting map for the radial velocity at the radial distance of the stereo - a spacecraft ( at au and separated in longitude from earth by less than 15 , see http://stereo-ssc.nascom.nasa.gov/where.shtml ) .the white line indicates its trajectory as the spacecraft traverses from right to left as indicated by the corresponding day of year ( doy ) used as the horizontal axis .a respective map for stereo - b is very similar to panel ( a ) and is , therefore , omitted .instead panel ( b ) shows the equivalent map for the ulysses spacecraft , which at that time performed a fast - latitude scan and was located at heliocentric distances of au .quantitative comparisons with the in situ measurements are presented in the bottom panels ( c)-(e ) , where the quantities shown are ( top to bottom ) radial velocity , particle number density , temperature , radial magnetic field , and magnetic field magnitude .the spacecraft data ( red lines ) have been averaged to the one degree angular resolution of the simulated data ( black lines ) .a quite good match is found along the orbit of stereo - b ( panel ( d ) ) , where the prominent high - speed streams centered around doys 239 and 246 are captured in terms of magnitude and stream width , while a third high - speed stream at doy 250 is somewhat underestimated .furthermore , the simulation data shows an additional feature at doy 236 , which is not present in the spacecraft data , and can be identified as an excursion of the northern coronal hole in panel ( a ) .a similar behavior is found for stereo - a .here we demonstrate the effect of artificially shifting the spacecraft position slightly ( 4 south in latitude ) which results in the dashed blue curve .the comparison seems significantly improved as the additional features prominence is mitigated while the observed stream at doy 250 is now captured very acurately .this shows that a comparison strictly along a spacecraft trajectory may not always be satisfactory at first glance , but that a thorough inspection of such maps as presented in panels ( a ) and ( b ) can help identifying the actually observed ones , which may just be slightly off the trajectory in the simulation ( see section [ sec : conclusions ] for further discussion ) .the comparison with ulysses data for radial velocity is also somewhat dominated by the additional feature around doy 239 , however , apart from that the comparison is quite satisfactory throughout the course of this cr .+ the pressure associated quantities , , and exhibit magnitudes of the correct order with respective strong enhancements in compression regions associated with the forming cirs at the leading edges of high - speed streams .the magnetic field strength , however , seems to be systematically underestimated and might have to be increased in future simulations .+ another interesting quantity is the radial magnetic field component , through which sector boundaries ( i.e. 
current sheet crossings ) can be identified , which is difficult , however , due to rapid fluctuations in the spacecraft data .still , the average polarity and sector boundaries are captured fairly well , e.g. for stereo - b , current sheet crossings occur at doy 243 and 250 in good agreement with the measurements .figure [ fig : comp2 ] shows results for cr2059 in the same format as figure [ fig : comp ] .most features are captured relatively well in the stereo comparisons with the largest deviations occuring at the beginning and end of this cr where the simulation data show a high - speed feature not seen in the measurements . a shift in latitude for stereo - a shows again that a small deviation from the actual trajectory gives better results and proves the presence of respective high - speed features , which are just slightly offset .the comparison with ulysses data shows excellent agreement .+ the results for cr2061 are shown in figure [ fig : comp3 ] which are similar to the ones for cr2060 along the stereo trajectories , and satisfactory agreement is found .ulysses , on its way to northern polar regions , encounters predominantly the fast solar wind coming from the respective northern coronal hole .the simulation data along its trajectory , however , does not show the return to slow - speed wind occuring twice .again , these are present in the global topology ( see panel ( b ) ) but are located too far south so that an 8 degree shift south in latitude is necessary to produce results more similar to the measurements . + we believe to have found a reasonable agreement with spacecraft data at relatively small heliocentric distances and , therefore , we feel that the results we obtain at greater distances can be trusted to also be a good representation to the local solar wind conditions ..,scaledwidth=80.0% ] an arising problem , however , is the common assumption of stationary boundary conditions during the course of a cr .on the one hand , this is reasonable for propagating solutions out to 1 - 2 au only , because the solar wind takes a relatively short time to propagate there ( days / au ) as compared to the duration of a cr ( days ) . 
on the other hand ,when extending the simulations to larger radial distances , stationary boundary conditions may become unreasonable if there is a significant change from one cr to the next one , because the propagation time becomes comparable to and eventually even larger than the duration of a cr , so that the heliosphere is filled with solar wind whose composition changes according to the different boundary conditions .this latter effect will have to be considered when we extend the simulation box to larger radial distances , so that time - dependent boundary conditions have to be applied .this will , however , take considerably longer computation time .a simpler approach appears to be possible for cr2060 - 2061 , because the boundary conditions change rather little and stationary boundary conditions still appear to be a reasonable assumption .a respective simulation was carried out applying constant inner radial boundary conditions of cr2060 and extending the simulation box in the radial direction to 10au with a resolution of =[2r_\odot,2^\circ,2^\circ]$ ] .+ an equatorial slice of the simulation box depicting radial velocity is shown in figure [ fig:2060_eq ] .three initially distinct high - speed streams can be identified that with increasing heliocentric distance begin to interact and form a cmir with a large angular extent .this is shown in a quantitative manner in figure [ fig:2060_evolution ] , which is similar to figure [ fig : pizzo_evolution ] as it shows radial velocity ( black solid ) and total pressure ( blue dashed ) at the equator at radial distances from 1.5 to 9.5 au .the horizontal axis uses a longitude , which is carrington longitude minus in order to show the evolution and interaction out to 6.5 au without hitting the longitudinal periodic boundary .only one of the three initial high - speed streams at 1.5 au shows a strong compression region at its stream interface ( si1 , green dashed - dotted ) . at 2.5 au two stream interfaces withrespective forward ( fs orange , dashed / dotted ) and reverse shocks ( rs , red dashed / dotted ) propagating away from them can be seen .fs1 and rs2 move towards and finally through each other between 3.5 and 4.5 au and a cmir is formed , which further interacts with the third initial high - speed stream s shocks a little beyond 8 au .the only prominent structure left at 9.5 au is a large merged interaction region bounded by the forward and reverse shocks ( fs1 and rs1 ) of the initially steepest high - speed feature with a complicated internal structure as a result of the merging process .the influence of such complicated structures on particle transport will be interesting to investigate .we demonstrated the capability of the mhd code cronos to model cir - associated structures ( compression regions , shock pairs ) in a test case where we are in agreement with the earlier results by . to model more realistic solar wind conditions , we used gong magnetograms and the fdips potential field solution as input to the wsa model to derive inner boundary conditions for our mhd code . to our knowledgethis is the first time that the wsa model is used in conjuction with the fdips model , which can make use of the full resolution of a given magnetogram without introducing numerical artifacts that can arise in the usual spherical harmonics expansion of the coronal potential field .our results could be shown to be in reasonable agreement with spacecraft data .+ other studies have looked at cr2060 using input from the wsa model . 
in one example , the authors validated their findings by comparison with ace and messenger spacecraft data , though no comparison with ulysses data was performed to validate out - of - ecliptic results . the agreement of the in - ecliptic results is comparable to the one found here . ulysses comparisons for this time period were performed by , but the focus was on single cir structures instead of investigating the whole cr . therefore , in performing simulations to reproduce simultaneous multi - spacecraft observations including out - of - ecliptic data and also considering temporally adjacent crs , our modeling extends previous work and provides a suitable framework for a subsequent study of 3d energetic particle transport . + there are several reasons for occasional deviations when directly comparing simulation results to spacecraft in situ measurements , which can essentially be attributed to the simplifications made in the model . amongst others , the following reasons can be listed : first , it has been shown that results using input from different solar observatories may differ quite substantially . along this track , an ensemble modeling technique has been suggested that takes into account results from different models and observatories , which when combined give a better solution . secondly , the empirical formulas used to set the inner radial boundary conditions are not well constrained and need some tuning that may depend on the observatories' input data and the phase of the solar cycle . similarly , the pfss and scs models are rather crude estimates of the inner and outer coronal fields , and are also subject to empirical parameters such as the source surface radius , which may not be constant as commonly assumed , but may vary depending on angular position and solar cycle . a tuning of all these parameters could be performed to diminish differences between simulations and in - situ measurements ; however , this does not give insight into the underlying physics on the one hand , and is also a very time consuming undertaking on the other hand . thirdly , since the whole solar surface is not visible from earth at a given time , synoptic magnetograms always contain non - simultaneous observations . flux - transport models ( e.g. ) may be an effective tool to model the unobserved hemisphere of the sun and improve magnetograms . these are , however , not implemented in our model , but may be subject to future work . + a related problem is the common assumption of stationary boundary conditions during the course of a cr .
while this remains reasonable for propagating solutions out to about 1 au only because the solar wind s crossing time is much smaller than the duration of a cr it becomes necessary to apply time - dependent boundary conditions when extending the simulations to larger radial distances so that the heliosphere is filled with solar wind with composition changes according to the changing coronal conditions .+ this latter effect will have to be considered when we extend the simulation box to larger radial distances , so that time - dependent boundary conditions may have to be applied .this will , however , take considerably longer computation time .a simpler approach presented here was taken for cr2060 - 2061 , because the boundary conditions change rather little and stationary boundary conditions might still be a reasonable assumption .time - dependent simulations and the inclusion of the poles will be adressed in an upcoming paper , where we will utilize the modeled 3d solar wind structure to investigate the transport of energetic particles , such as jovian electrons and galactic cosmic rays . financial support for the project fi 706/8 - 2 ( within research unit 1048 ) , as well as for the projects fi 706/14 - 1 and he 3279/15 - 1 funded by the deutsche forschungsgemeinschaft( dfg ) , and fwf - projekt i1111 is acknowledged .the stereo / sept chandra / ephin and soho / ephin project is supported under grant 50oc1302 by the federal ministry of economics and technology on the basis of a decision by the german bundestag .tobias wiengarten wants to thank nick arge and janet luhman for their helpful comments .acua , m. h. ; curtis , d. ; scheifele , j. l. ; russell , c. t. ; schroeder , p. ; szabo , a. ; luhmann , j. g. , 2008 , space science reviews , 136 , 203226 altschuler , m. d. , & newkirk , g. , 1969 , solar physics , 9 , 131 arge , c. n. , odstrcil , d. , pizzo , v. j. , & mayer , l. r. 2003 , solar wind ten , vol .679 , eds .m. velli , r. bruno , and f. malara ( melville , ny : aip ) , 190 conference series , 190 arge , c. n. , & pizzo , v. j. , 2000 , journal of geophysical research , 105 , 10465 bame , s. j. ; mccomas , d. j. ; barraclough , b. l. ; phillips , j. l. ; sofaly , k. j. ; chavez , j. c. ; goldstein , b. e. ; sakurai , r. k. , 1992 , astronomy and astrophysics supplement series , 92 , 237265 , l. f. , 1995 , interplanetary magnetohydrodynamics , oxford university press , new york balogh , a. ; beek , t. j. ; forsyth , r. j. ; hedgecock , p. c. ; marquedant , r. j. ; smith , e. j. ; southwood , d. j. ; tsurutani , b. t. , 1992 , astronomy and astrophysics supplement series , 92 , 221236 , a. and gosling , j. t. and jokipii , j. r. and kallenbach , r. and kunow , h. , 1999 , space science reviews , 89 broiles , t. w. ; desai , m. i. ; lee , c. o. ; macneice , p. j. , 2013 , journal of geophysical research : space physics , 118 , 47764792 , r. and carbone , v. , 2013 , living reviews in solar physics , 10 , 2 chenette , d. l. , 1980 , journal of geophysical research , 85 , 22432256 , o. , et al ., 2007 , , 654 , l163l166 dalakishvili , g. , kleimann , j. , fichtner , h. , & poedts , s. , 2011 , astron ., 536 , 11 detman , t. r. ; intriligator , d. s. ; dryer , m. ; sun , w. ; deehr , c. s. ; intriligator , j. , 2011 , journal of geophysical research : space physics , 116 , citeid a03105 dresing , n. ; gmez - herrero , r. ; heber , b. ; mller - mellin , r. ; wimmer - schweingruber , r. and klassen , andreas , 2009 , solar physics , 256 , 409 feng , x. ; yang , l. ; xiang , c. 
; wu , s. t. ; zhou , y. ; zhong , d. , 2010 , , 723 , 300319 , x. , yang , l. , xiang , c. , jiang , c. , ma , x. , wu , s. t. , zhong , d. , and zhou , y. , 2012 , solar physics , 279 , 207229 ferreira , s. e. s. ; potgieter , m. s. ; burger , r. a. ; heber , b . ; and fichtner , h. , 2001a , journal of geophysical research , 106 , 2497924988 , s. e. s. ; potgieter , m. s. ; heber , b. ; fichtner , h. ; burger , r . a. and ferrando , p. , 2001b, advances in space research , 27 , 553558 galvin , a. b. ; kistler , l. m. ; popecki , m. a. ; farrugia , c. j. ; simunac , k. d. c. ; ellis , l. ; mbius , e. ; lee , m. a. ; boehm , m. ; carroll , j. ; crawshaw , a. ; conti , m. ; demaine , p. ; ellis , s. ; gaidos , j. a. ; googins , j. ; granoff , m. ; gustafson , a. ; heirtzler , d. ; king , b. ; knauss , u. ; levasseur , j. ; longworth , s. ; singer , k. ; turco , s. ; vachon , p. ; vosbury , m. ; widholm , m. ; blush , l. m. ; karrer , r. ; bochsler , p. ; daoudi , h. ; etter , a. ; fischer , j. ; jost , j. ; opitz , a. ; sigrist , m. ; wurz , p. ; klecker , b. ; ertl , m. ; seidenschwang , e. ; wimmer - schweingruber , r. f. ; koeten , m. ; thompson , b. ; steinfeld , d. , 2008 , space science reviews , 136 , 437486 gressl , c. ; veronig , a. m. ; temmer , m. ; odstril , d. ; linker , j. a. ; miki , z. ; riley , p. , 2013, solar physics , online first giacalone , j. ; jokipii , j. r. ; kta , j. , 2002 , , 573 , 845850 , j. , cameron , r. , schmitt , d. , and schssler , m. , 2010 , , 709 , 301307 , b. ; sanderson , t. r. and zhang , m. , 1999 , advances in space research , 23 , 567579 , b. and fichtner , h. and scherer , k. , 2006 , space science reviews , 125 , 81 - 93 kissmann , r. , flaig , m. , & kley , w. 2011 , 5th international conference of numerical modeling of space plasma flows ( astronum 2010 ) , vol .n. v. pogorelov ( san francisco , ca : asp ) , 536 kissmann , r. ; kleimann , j. ; fichtner , h. ; grauer , r. , 2008 , monthly notices of the royal astronomical society , 391 , 15771588 kissmann , r. ; fichtner , h. ; ferreira , s. e. s. , 2004 , astronomy and astrophysics , 419 , 357363 kleimann , j. , kopp , a. , fichtner , h. , & grauer , r. , 2009 , annales geophysicae , 27 , 989 khl , p. ; dresing , n. ; dunzlaff , p. ; fichtner , h. ; gieseler , j. ; gmez - herrero , r. ; heber , b. ; klassen , a. ; kleimann , j. ; kopp , a. ; potgieter , m. ; scherer , k. and strauss , r. d. , 2013 , arxiv.org , 1309.1344v1 kunow , h. ; drge , w. ; heber , b. ; mller - mellin , r. ; rhrs , k. ; sierks , h. ; wibberenz , g. ; ducros , r. ; ferrando , p. ; rastoin , c. ; raviart , a. and paizis , c. , 1995 , space science reviews , 72 , 397402 , r. , linker , j. a. , and miki , z. , 2009 , , 690 , 902912 marsden , r. g. , 2001 , astrophysics and space science , 277 , 337347 mccomas , d. j. ; bame , s. j. ; barker , p. ; feldman , w. c. ; phillips , j. l. ; riley , p. and griffee , j. w. , 1998 , space science reviews , 86 , 563612 mcgregor , s. l. , hughes , w. j. , arge , c. n. , owens , m. j. , & odstrcil , d. , 2011 , journal of geophysical research ( space physics ) , 116 , a03101 mller - mellin , r. ; kunow , h. ; fleiner , v. ; pehlke , e. ; rode , e. ; rschmann , n. ; scharmberg , c. ; sierks , h. ; rusznyak , p. ; mckenna - lawlor , s. ; elendt , i. ; sequeiros , j. ; meziat , d. ; sanchez , s. ; medina , j. ; del peral , l. ; witte , m. ; marsden , r. and henrion , j. , 1995 , solar physics , 162 , 483504 odstrcil , d. ; riley , p. ; zhao , x. p. 
, 2004, journal of geophysical research : space physics , 109 pahud , d. m. ; merkin , v. g. ; arge , c. n. ; hughes , w. j. ; mcgregor , s. m. 2012 , journal of atmospheric and solar - terrestrial physics , 83 , 3238 , e. n. , 1958 , , 128 , 664 , e. n. , 1963 , interplanetary dynamical processes , wiley - interscience , new york pizzo , v. j. , 1982 , journal of geophysical research , 87 , 4374 , m. s. , 2013a , living reviews in solar physics , 10 , 3 , m. s. , 2013b , space science reviews , 176 , 165176 press , w.h . ; teukolsky , s.a . ; vetterling , w.t . ; flannery , b.p ., 2007 , cambridge university press , isbn : 978 - 0 - 521 - 88068 - 8 richardson , i. g. , 2004 , space science reviews , 111 , 267376 riley , p. , linker , j. a. , & miki , z. 2001 , journal of geophysical research , 106 , 15889 riley , p. , linker , j. a. , miki , z. , lionello , r. , ledvina , s. a. , & luhmann , j. g. 2006 , , 653 , 1510 , p.,lionello , r. , linker , j. a. , miki , z. , luhmann , j. , and wijaya , j. , 2011 , solar physics , 274 , 361377 riley , p. ; linker , j. ; miki , z. , j. , 2013 , journal of geophysical research : space physics , 118 , 600607 schatten , k. h. , wilcox , j. m. , & ness , n. f. 1969 , solar physics , 6 , 442 schatten , k. h. 1971 , cosmic electrodynamics , 2 , 232 snodgrass , h. b. , & ulrich , r. k. , 1990 , , 351 , 309 , o. ; engelbrecht , n. e. ; burger , r. a. ; ferreira , s. e. s. ; fichtner , h. ; heber , b. ; kopp , a. ; potgieter , m. s. ; scherer , k. , 2011 , , 741 , 23-+ , r. d. and potgieter , m. s. and ferreira , s. e. s. , 2013 , advances in space research , 51 , 339 - 349 stevens , m. l. , linker , j. a. , riley , p. , & hughes , w. j. , 2012 , journal of atmospheric and solar - terrestrial physics , 83 , 22 tth , g. ; van der holst , b. ; huang , z. , 2011 , , 732 , 102 tran , t. 2009 , improving the predictions of solarwind speed and interplanetary magnetic field at the earth , phd thesis , ucla , a. v. , and goldstein , m. l. , 2003 , journal of geophysical research ( space physics ) , 108 , 1354 vogt , a. , 2013 , modellierung des einflusses korotierender wechselwirkungsregionen auf 7 mev - jupiterelektronen , master s thesis , cau wang , y .- m . , & sheeley , n. r. , jr . 1990 , , 355 , 726 weber , e. j. , and davis , l ., 1967 , , 148 , 217 wiengarten , t. , kleimann , j. , fichtner , h. , cameron , r. , jiang , j. , kissmann , r. , & scherer , k. 2013 , journal of geophysical research ( space physics ) , 118 , 29 zank , g. p. ; matthaeus , w. h. and smith , c. w. , 1996 , journal of geophysical research , 101 , 1709317107 , x. p. , and hoeksema , j. t. , 2010 , solar physics , 266 , 379390
|
the transport of energetic particles such as cosmic rays is governed by the properties of the plasma being traversed . while these properties are rather poorly known for galactic and interstellar plasmas due to the lack of in situ measurements , the heliospheric plasma environment has been probed by spacecraft for decades and provides a unique opportunity for testing transport theories . of particular interest for the 3d heliospheric transport of energetic particles are structures such as corotating interaction regions ( cirs ) , which , due to strongly enhanced magnetic field strengths , turbulence , and associated shocks , can act as diffusion barriers on the one hand , but also as accelerators of low - energy crs on the other . in a two - fold series of papers we investigate these effects by modeling inner - heliospheric solar wind conditions with a numerical magnetohydrodynamic ( mhd ) setup ( this paper ) , which will serve as an input to a transport code employing a stochastic differential equation ( sde ) approach ( second paper ) . in this first paper we present results from 3d mhd simulations with our code cronos : for validation purposes we use analytic boundary conditions and compare with similar work by pizzo . for a more realistic modeling of solar wind conditions , boundary conditions derived from synoptic magnetograms via the wang - sheeley - arge ( wsa ) model are utilized , where the potential field modeling is performed with a finite - difference approach ( fdips ) in contrast to the traditional spherical harmonics expansion often utilized in the wsa model . our results are validated by comparing with multi - spacecraft data for in - ecliptic ( stereo - a / b ) and out - of - ecliptic ( ulysses ) regions .
|
atlas detector is one of the two general - purpose experiments currently under construction for the large hadron collider ( lhc ) at cern .lhc is a proton - proton collider with a 14-tev centre - of - mass energy and a design luminosity of . atlas consists of the inner detector ( i d ) , the electromagnetic and the hadronic calorimeters , and the muon spectrometer .the i d is a system designed for tracking , particle identification and vertex reconstruction , operating in a 2-t superconducting solenoid .the semiconductor tracker ( sct ) forms the middle layer of the i d between the pixel detector and the transition radiation detector . the sct system , depicted in fig .[ fig : sct ] , comprises a barrel made of four nested cylinders and two end - caps of nine disks each .the cylinders together carry 2112 detector units ( _ modules _ ) while 1976 end - cap modules are mounted on the disks in total .the whole sct occupies a cylinder of 5.6 m in length and 56 cm in radius with the innermost layer at a radius of 27 cm .it provides a pseudorapidity coverage of up to .the silicon modules consist of one or two pairs of single - sided p - in - n microstrip sensors glued back - to - back at a 40-mrad stereo angle to provide two - dimensional track reconstruction .the 285- thick sensors have 768 ac - coupled strips with an pitch for the barrel and a 57 pitch for the end - cap modules . between the sensor pairsthere is a highly thermally conductive baseboard .barrel modules follow one common design , while for the forward ones four different types exist based on their position in the detector .the readout of the module is based on 12 abcd3ta asics manufactured in the radiation - hard dmill process mounted on a copper / kapton hybrid .the abcd3ta chip features a 128-channel analog front end consisting of amplifiers and comparators and a digital readout circuit operating at a frequency of 40.08 mhz .this asic utilizes the binary scheme where the signals from the silicon detector are amplified , compared to a threshold and only the result of the comparison enters the input register and the digital pipeline .the clock and command signals as well as the data are transferred from and to the off - detector electronics through optical links .the i d volume will be subject to a fluence of charged and neutral particles from the collision point and from back - scattered neutrons from the calorimeters .an estimated fluence at the innermost part of the sct is 1-mev - neutrons ( or equivalently 24-gev - protons ) in ten years of operation .the sct has been designed to be able to withstand these fluences and its performance has been extensively studied in beam tests using irradiated sct modules .the lhc operating conditions demand challenging electrical performance specifications for the sct modules and the limitations mainly concern the accepted noise occupancy level , the tracking efficiency , the timing and the power consumption .the most important requirements the sct module needs to fulfil follow . 
the total effective noise of the modules results from two principal contributions ; the front - end electronics and the channel - to - channel threshold matching .the former is the equivalent noise charge ( enc ) for the front - end system including the silicon strip detector .it is specified to be less than 1500 enc before irradiation and 1800 enc after the full dose of 24-gev - equivalent - protons .the noise hit rate needs to be significantly less than the real hit occupancy to ensure that it does not affect the data transmission rate , the pattern recognition and the track reconstruction .the foreseen limit of per strip requires the discrimination level in the front - end electronics to be set to 3.3 times the noise charge . to achieve this condition at the atlas operating threshold of 1 fc, the total equivalent noise charge should never be greater than 1900 enc . assuming a 3.3-fc median signal at full depletion that corresponds to a median signal - to - noise ratio of 10:1 .in general the tracking performance of a particle detector depends on various parameters : the radial space available in the cavity , which limits the lever arm , the strength of the magnetic field , and the intrinsic precision and efficiency of the detector elements . to this respecta starting requirement is a low number of dead readout channels , specified to be less than 16 for each module to assure at least 99% of working channels .furthermore no more than eight consecutive faulty channels are accepted in a module . for a correct track reconstruction, every hit has to be associated to a specific bunch crossing .that is translated to a demand for a time - walk of less than 16 ns , where the time - walk is defined as the maximum time variation in the crossing of the comparator threshold at 1 fc over a signal range of 1.25 to 10 fc .the fraction of output signals shifted to the wrong beam crossing is required to be less than 1% .the nominal values for the power supplies of the asics are set as follows : * analogue power supply : . *digital power supply : .* detector - bias : high voltage of up to 500 v can be delivered by the asics .the nominal power consumption of a fully loaded module is 4.75 w during operation at 1 fc threshold with nominal occupancy ( 1% ) and 100 khz trigger rate ( l1 rate ) . including the optical readout ,the maximal power dissipation should be 7.0 w for the hybrid and the heat generated in the detectors just before thermal run - away should be 2.6 w for outer module wafers and 1.6 w for inner ones .the double pulse resolution directly affects the efficiency .it is required to be 50 ns to ensure less than 1% data loss at the highest design occupancy .standard daq system and electrical tests , described in the following sections , aim at verifying the hybrid and detector functionality after the module assembly and at demonstrating the module performance with respect to the required electrical specifications .in all the measurements performed , the asics are powered and read out electrically via the standard sct daq system which contains the following vme modules : * cloac ( clock and control ) : this module generates the clock , fast trigger and reset commands for the sct modules in the absence of the timing , trigger and control system . 
*slog ( slow command generator ) : it allows the generation of slow commands for the control and configuration of sct front - end chips for up to six modules .it fans out clock and fast commands from an external source ( cloac ) .alternatively an internal clock may be selected , allowing slog to generate clock and commands in stand - alone mode . *mustard ( multichannel semiconductor tracker abcd readout device ) : a unit designed to receive , store and decode the data from multiple sct module systems . up to 12 data streams ( six modules ) can be read out from one mustard . *scthv : a prototype high voltage unit providing detector bias to four modules . *sctlv : a custom - designed low voltage power supply for two silicon modules .the software package sctdaq has been developed for testing both the bare hybrids and the modules using the aforementioned vme units .it consists of a c++ dynamically linked library ( stdll ) and a set of root macros which analyze the raw data obtained in each test and stores the results in a database . a schematic diagram of the sctdaq is shown in fig .[ fig : sctdaq ] .every module is characterized to check the functionality and performance stability and to verify that the specifications are met . using the internal calibration circuit to inject charge of adjustable amplitude in the preamplifier of each channel, the front - end parameters such as gain , noise and channel - to - channel threshold spread are measured .the characterization sequence includes the following steps : * digital tests are executed to identify chip or hybrid damage .these include tests of the redundancy links , the chip by - pass functionality and the 128-cell pipeline circuit .* optimization of the delay between calibration signal and clock ( strobe delay ) is performed on a chip - to - chip basis . * to minimize the impact of the threshold non - uniformity across the channels on the noise occupancy , the abcd3ta design foresees the possibility to adjust the discriminator offset . a threshold correction using a digital - to - analog converter( trim dac ) per channel with four selectable ranges ( different for each chip ) has been implemented in the asics .the _ trimming _ procedure allows an improved matching of the comparators thresholds ; this is an important issue for the irradiated modules due to the increase of threshold spread with radiation dose . *the gain and electronic noise are obtained channel by channel with threshold scans performed for ten different values of injected charge ranging from 0.5 to 8 fc ( response curve procedure ; see fig . [ fig:3pgain ] ) . for each charge injected the corresponding value in mv is extracted as the 50% point ( ) of the threshold scan fitted with a complementary error function ( -curve ) .the gain , input noise and offset are deduced from the correlation of the voltage output in mv versus the injected charge in fc . * a threshold scan without any charge injectionis performed to yield a direct measurement of the noise occupancy at 1 fc , as shown in fig .[ fig : noiseocc ] .the adjusted discriminator offset is applied to ensure a uniform measurement across the channels . 
* a dedicated scanis also executed to determine the time - walk .setting the comparator threshold to 1 fc for each value of injected charge ranging from 1.25 to 10 fc a complementary error function is fitted to the falling edge of a plot of efficiency versus the setting of the strobe delay to determine the 50%-efficiency point .the time - walk is given by the difference between delays calculated for 1.25 fc and for 10 fc injected charge .as part of the quality assurance test , a long - term test with electrical readout is also performed .the asics are powered , clocked and triggered during at least 18 hours while the module bias voltage is kept at 150 v and its thermistor temperature is .the bias voltage , chip currents , hybrid temperature , the leakage current and the noise occupancy are recorded every 15 min , as shown in fig .[ fig : longterm ] .moreover , every two hours a test verifying correct functionality of the module is performed . a final measurement of the detector leakage current as a function of the bias voltage ( curve ) is also performed at to assure that the current drawn by the whole module is low enough for the safe operation of the detector .the current values at 150 and 350 v are recorded and compared with those of previous curve measurements before and after the module sub - assembly . during the electrical teststhe modules are mounted in a light - tight aluminum box which supports the modules at the two cooling blocks of the baseboard .the test box includes a cooling channel connected to a liquid coolant system of adjustable temperature .the operating temperature is monitored by thermistors ( one for the end - cap and two for the barrel hybrid ) mounted on the hybrid .the box also provides a connector for dry air circulation .subsequently , the module test box is placed inside an environmental chamber and it is electrically connected to the readout system and vme crate .up to six modules can be tested simultaneously with this configuration .the grounding and shielding scheme of the setup is of crucial importance , therefore a careful optimization is necessary . the tests are carried out at a detector bias of 150 v and at an operating temperature of .all production modules have to pass successfully the aforementioned tests long - term test , characterization and leakage current measurement as a part of their quality assurance plan .the hybrids are also tested before assembly using the same setup and software package .the results presented here correspond to the end - cap production modules that qualified for assembly onto disks , which amount to ( including spares ) representing about half of the total number of sct modules . in fig .[ fig : gain ] the average gain per module is shown for all qualified forward modules .the average gain value is about 57 mv / fc at a discriminator threshold of 2 fc and it is of the same level as the one obtained from system tests . the noise level per module is shown in fig . [ fig : enc ] .the two distinct contributions reflect the difference between _ short _ modules ( inner and short middle ) and _ long _ ones ( long middle and outer ) . the former consist on only one pair of sensors having a strip length of around 6 cm , while the latter have two detector pairs with a total length of 12 cm , resulting in higher strip resistance .an average of 1550 with an r.m.s . of about has been attained for the long modules .the noise occupancy at a comparator threshold of 1 fc is measured to be on average , i.e. 
twenty times lower than the requirement of per strip , as illustrated in fig .[ fig : no ] .these values are compatible with the ones acquired from non - irradiated prototype modules , which also showed that after irradiation the noise levels although higher do not compromise the overall detector performance .it should be stressed that the acquired noise measurements largely depend on the degree of the setup optimization which generally varies across the testing sites , resulting in a higher than actual measured value of the module noise .the noise also depends on the temperature on the hybrid increasing by per degree celsius . since under standard conditions at the lhcthe modules will operate with a thermistor temperature near , a lower noise level than the one obtained during quality control tests is expected during running . another aspect of the readout requirements is the number of defective channels per module . as shown in fig .[ fig : defects ] , on average less than three channels per module are _ lost _ , i.e. , have to be masked , which represents a fraction of .this category includes dead , stuck , noisy channels , as well as channels that have not been wire - bonded to the strips and channels that can not be trimmed .other channels exhibit less critical defects such as low or high gain ( or offset ) with respect to the chip - average .these _ faulty _ channels amount to less than two per module ( ) .their presence is due either to chip defects or defective detector strips ( e.g.punch-through or short - circuited channels ) .as far as the final curves are concerned , the full statistics results verify the good behavior of the sensors at a high bias voltage .the very few cases where a problem was observed was either due to detector damage after assembly or to a defective bias voltage connection on the hybrid . in the latter case the hybrids were reworked to re - establish the connection .to recapitulate , only a fraction of about 2.4% of the tested modules does not pass at least one electrical characterization test .most of these modules exhibit a high number of consecutive faulty channels due to minor damage ( scratch ) of module components such as the sensors or the fan - ins .the high yield of the electrical tests performed on the production modules reflects the strict quality control criteria set during the asics and the hybrids selection .the results of the systematic electrical tests performed in all sct forward production modules demonstrate that they are well within specifications . the attained gain and the noise performance are compatible with the ones obtained in several system tests involving detector and electronics prototypes .the fraction of defective channels per module is kept well below 1% .the production of the silicon modules has finished and their mounting onto large structures ( cylinders and disks ) is well under way .the whole sct is expected to be ready for installation in the atlas cavern at the lhc together with the transition radiation detector in spring 2006 .the author would like to thank carlos lacasta and joe foster for their help in retrieving the data presented here from the corresponding database and for useful comments during the preparation of this contribution .j. n. jackson [ atlas sct collaboration ] , nucl .instrum .a * 541 * ( 2005 ) 89 .c. lacasta , nucl .instrum .a * 512 * ( 2003 ) 157 .d. robinson _et al . _ ,instrum . meth .a * 485 * ( 2002 ) 84 . c. ketterer ,ieee trans .sci . * 51 * ( 2004 ) 1134 .w. 
dabrowski , nucl .instrum .a * 501 * ( 2003 ) 167 .i. mandic [ atlas sct collaboration ] , ieee trans .* 49 * ( 2002 ) 2888 ; + p. j. dervan [ atlas sct collaboration ] , nucl .instrum .a * 514 * ( 2003 ) 163 ; + p. k. teng _ et al . _ ,instrum .a * 497 * ( 2003 ) 294 ; + l. s. hou , p. k. teng , m. l. chu , s. c. lee and d. s. su , nucl .instrum .a * 539 * ( 2005 ) 105 .y. unno _ et al ._ , ieee trans . nucl .* 49 * ( 2002 ) 1868 ; + f. campabadal _ et al ._ , nucl .instrum .a * 538 * ( 2005 ) 384 . c. lacasta , `` electrical specifications and expected performance of the end - cap module , '' atlas project document , atl - is - en-0008 ( 2002 ) , https://edms.cern.ch/document/316205/1 c. lacasta , f. anghinolfi , j. kaplon , r. szczygiel , w. dabrowski , p. demierre and d. ferrere , `` production database for the atlas - sct front end asics , '' proc ._ 6th workshop on electronic for lhc experiments , cracow , poland , 11 - 15 sep 2000 _ [ cern-2000 - 010 ] ( 2000 ) . c. lacasta_ , `` electrical results from prototype modules , '' atlas project document , atl - is - tr-0001 ( 2002 ) , https://edms.cern.ch/document/316209/1 ; + p.w. phillips [ atlas sct collaboration ] , `` system performance of atlas sct detector modules , '' proc . _8th workshop on electronics for lhc experiments , colmar , france , 9 - 13 sep 2002 _ [ cern-2002 - 003 ] , p. 100104
|
the atlas semiconductor tracker ( sct ) together with the pixel and the transition radiation detectors will form the tracking system of the atlas experiment at lhc . it will consist of 20000 single - sided silicon microstrip sensors assembled back - to - back into modules mounted on four concentric barrels and two end - cap detectors formed by nine disks each . the sct module production and testing has finished while the macro - assembly is well under way . after an overview of the layout and the operating environment of the sct , a description of the readout electronics design and operation requirements will be given . the quality control procedure and the daq software for assuring the electrical functionality of hybrids and modules will be discussed . the focus will be on the electrical performance results obtained during the assembly and testing of the end - cap sct modules . atlas , data acquisition , quality control , silicon radiation detectors .
|
in most introductory physics courses , students are assigned required weekly problem sets that consist of five to ten problems ( or exercises ) from the textbook . while this structure provides students with opportunities to practice applying key concepts to new situations , it does not guarantee learning . by providing all students with the same list of practice problems , an instructor is not providing personalized practice . while it could be argued that the best scenario would be for the instructor to provide personalized practice for each student , this is often impractical and sends the message to students that they are dependent on an authority for their learning . while instructors are in the position to offer feedback and suggestions to students , the students need to assume some responsibility for their own learning . ideally , a student would employ metacognition and engage in self - regulated learning ( srl ) , which broadly describes a process by which a learner plans his or her task , monitors the work and thinking during the task and makes adjustments based upon the data gathered . a quote , typically attributed to john dewey , best expresses this need for srl : `` we don't learn from experience . we learn from reflecting on experience . '' to examine whether or not personalized learning and scaffolded self - regulation could impact student performance , a physics course utilized ideas from learning - controlled instruction ( lci ) . students were asked to select their own practice problems , with only a suggested list provided . to shift students' practice and attitudes about learning , they were asked to also engage in some guided self - regulation via prompts provided in homework reports . each of these features is described below , as are some of the results , which show that very few students shifted their views or practices significantly . what the reports and associated data collection did do is paint a clearer picture of what students are doing in their out - of - class practice and provide a glimpse of the impact homework without reflection has on student learning . a section of an algebra - based mechanics course , which is typically taken by junior life science majors , was modified to include flexible out - of - class practice . the section began with 25 students , with one of them withdrawing prior to the end of the semester . the students' incoming gpa , 3.38 ; pre - instruction fci score , 9.3 ; and scientific reasoning ( as measured by the classroom test of scientific reasoning ( ctsr ) ) , 75.2% , were statistically identical to prior sections . the course included many traditional components of an introductory physics course - a weekly two - hour lab and three 50-minute lecture sessions that included four in - class tests and a final exam . homework comprised 20% of the students' final grade and consisted of two main types . in the first , students did short , almost daily , `` warm - up '' assignments from the textbook's accompanying workbook . these activities were aimed at practicing fundamental skills in problem - solving and refining their understanding of physical models . students were given credit if they came to class having attempted the activities ; correctness was not a criterion .
at the start of each class, the students would discuss their results and questions in groups .the class would then build on of the questions raised by students and any extensions or variations that the instructor added .the second type of homework was practice solving word problems , where students applied physics concepts and models to new , real - world scenarios .these problems came from the textbook as well as other resources , such as the university of minnesota context rich archive .rather than assign a single set of practice problems each week , students were provided with a list of 50 - 75 suggested problems in each of the four units .these were organized by learning outcome ( content focused ) and sorted by difficulty .this made it easier for students to locate the practice problems that were appropriate for them at that time .much like other previous learning controlled homework systems , this one allowed students to pick the problems that they wanted to practice .students were not required to attempt any set number or type of problems , they were free to choose their practice problems . rather than submitting the solutions ,students were required to submit reports in which they describe their planning , monitoring and adjusting . to collect information about what problems students were selecting and the degree to which they are engaging in self regulated learning ,students were asked to submit a report describing their practice and its effectiveness each time they worked on physics problems outside of class .the reports had two components- self evaluation of problem difficulty and scaffolded reflection questions .the questions were divided into two halves- one that was to be completed when they were beginning their practice and the other at the end .the first half included prompts that asked students to describe their goals for that particular practice session ( planning ) .[ fig1 ] ) the second half asked students to reflect on the effectiveness of their practice ( monitoring ) and articulate a plan for what their next practice will be ( adjusting ) .the report was not too cumbersome as it could be completed on a single hand - written page .complete sentences were nt required .because of a desire to have the students solve their homework problems with a think - aloud protocol , each student was given a smartpen that electronically captured audio and penstrokes .( http://www.livescribe.com ) the recordings made by the smartpens are essentially real - time movies of what was written and said .students used this same technology to capture the homework reports .students would write , or speak , out their responses , then once they synced the pen with their computer ( via usb ) , the reports would automatically be emailed to the instructor . at a minimum , one report needed to be submitted each week .if a student did not have the time to work on any practice within a given week , they could submit a report that described their plans for their future studying .as long as the adjusting section was completed , students were given credit for submitting a report , no matter its quality .reports were required in ten of the course s fifteen weeks . 
in any week, roughly 5/8 of the students submitted complete reports , 1/8 of the students submitted only the adjusting portion and 1/4 of the students failed to submit anything .all but two students consistently submitted only one report per week , usually in the 24-hour window before the deadline .some students admitted to doing unreported practice immediately before a test , but generally it seemed that students only sat down to do practice problems once a week . in each session , they would report an average of three practice problems , with the maximum number being fifteen .most of the report were superficial , with the most common description of students motivation for solving problems was to `` do practice . ''the purpose of doing the practice was the practice itself , not any specific learning objectives or shortcoming that they were looking to address . despite the guided prompts , and class discussion about learning , most students saw ( at least partially ) the practice as the goal rather than the process by which a goal is achieved .a few students did offer some more specific motivation : * _ after the exam i realized that even though i understood the theory behind problems , i need to work on challenging application word problems ._ * _ some of the examples in class lost me ; work on understanding signs better _ * _ be able to solve problems in a timely manner without outside help _ just as superficial as the motivation ( planning ) was , so was the reflection ( adjusting ) .most students simply listed `` do more practice '' as their next steps in studying physics .some did articulate more specific steps , which often involved seeking help from the instructor .the specificity or quality of the responses on the reports did not correlate with grades or problem - solving proficiency .neither was there a correlation between the number , or type , of practice problems reported and test grades .some students did admit to doing some unreported practice that could account for the lack of correlation .another explanation though could simply be that the practice is not having an impact on the students learning .if they are not engaging in some metacognitive thought , they simply may not be learning from the practice that they are doing . on an end - of - the - semester questionnaire ,nearly all of the students spoke very highly about being able to choose their own practice problems .* _ yes , i thought it was helpful because you could focus on certain areas that you needed to work on more ._ * _ yes , because it allowed me to focus on problems that i specifically needed help on .i could concentrate my focus to one area at a time if that is what i needed .it s nice to have homework tailored to each individual students needs . _* _ yes , i really did like this because it meant i did not have to waste my time on a bunch of level i or ii s if i could easily do them , and i could focus on the more challenging problems ._ the students understood the value in tailoring their practice to their own needs .what they said that they did not enjoy or benefit from were the homework reports .* _ i found the homework report somewhat unnecessary because i personally would have completed the suggested problems with or without the required reports ._ * _ they [ the reports ] took too long . 
_* _ i never really understood the use of the before and after questions ._ from these responses and their actual homework reports , it is very clear that most students did not engage in self - regulated learning .even though they liked the idea of being in control of their learning and selecting the practice problems , they did not take complete control of that process .isaac was perhaps the one student who showed and reported a noticeable amount of self - regulated learning , much of which he described as new for himself . from the beginning of the semester , he was somebody who was at risk for struggling in physics .entering the course , he had a gpa that was 0.3 lower than the class average . on the ctsr , he had a score that indicated a lack of formal operational thinking ( 63% ) .on the first three in - class tests , his scores were 18 - 24% below the class average . despite these difficulties ,isaac improved his performance by the fourth in - class test , scoring only 6% below the class average and professing a change in his habits and views of learning . throughout the semester , but especially between the third and fourth test , isaac described engaging in self - regulated learning .unlike his peers , he often described motivation for working on physics problems that was rooted in his past performance . * _ to better understand free fall motion because i have been struggling a bit _ * _ i hope to understand impulse/ momentum and how to better define my systems _ in the open - ended questionnaire at the end of the semester, isaac not only professed his appreciation for the learner - selected homework , he also commented on the usefulness of the weekly reports : the insight that he has into interpreting the monitoring portion of the report seems to indicate that he actually took the reports to heart and actively engaged in the process .the vast majority of students would usually say something to the effect of `` i have no questions '' in the monitoring portion .isaac was the only person who seemed to view the lack of questions as a problem . while not all students asked the instructor questions in their reports , enough did that it would seem that embarrassment was nt the main reason for a lack of stated questions .in addition to the improved test scores , isaac also showed improvement in his physics conceptual understanding that was better than expected .his fci normalized gain was much higher than students with similar scientific reasoning scores ( 0.70 vs 0.34 ) . perhaps because of his self - regulated learning , he exceeded expectations .the ability to engage in self - regulated learning is key to students success in a wide variety of courses as well as life after graduation .this autonomous learning can be viewed as one of the primary components to lifelong learning .once learners take an active role in planning , monitoring and adjusting their learning , they no longer are reliant on instructors .the skills and values behind this independent learning are core to a liberal education .self - regulation skills , and/ or related values , are ones that also might transfer to smaller scale tasks such as solving word problems . in these shorter , more focused tasks ,students also benefit highly from planning their solution before beginning the calculations , monitoring their thinking throughout so they may identify errors and adjusting any plans to correct errors . 
for at least one student, isaac, the flexible homework structure and reports appeared to have promoted self-regulated learning behaviors, which impacted his class performance and attitude. for the rest of the class, it was unclear whether there was a significant impact. at the very least, the flexible homework assignments did not degrade class performance on the in-class tests and final exam. while the tests vary from year to year, the final exam is unchanged, making a direct comparison possible. there was no statistically significant difference between the exam scores of this section and those of the prior two years, in which weekly problem sets were required. on the fci, this section actually showed a statistically significantly larger normalized gain than in the prior two years (0.57 vs 0.45, p < 0.05). given the students' performance while doing only a small number of practice problems out of class each week, instructors may wish to reconsider the size of their assigned problem sets. a learner-controlled homework system appears to provide learning comparable to one that is completely instructor directed. also, moving away from collecting problem solutions, which didn't degrade student performance, could be a route for instructors who wish to avoid the plagiarism issues that plague many introductory courses. based on their reports, it seems that most students did not engage in self-regulated learning. without this reflection, out-of-class practice does not appear to produce significant learning. given the superficial reflections included in the homework reports, it seems likely that most students do not carefully plan, monitor or adjust their practice. if this is the case, it is plausible that homework sets of any size, when not accompanied by reflection, won't have a significant impact on their learning. more detailed studies of students' homework habits are needed to fully understand the role that out-of-class practice plays in learning, but there is some evidence here that students are not benefiting from their practice. despite the scaffolded reflection and in-class discussions about metacognition, most students did not see the value of self-regulated learning. the one student who did demonstrated higher than expected performance on the fci post-instruction test and the last in-class test. while the section's test average was comparable to prior ones, there likely was room for improvement had the students engaged in greater reflection.
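the fci comparison above rests on hake's normalized gain, g = (post − pre)/(100 − pre), and a test of statistical significance. a minimal sketch of that computation is below; the score arrays are hypothetical placeholders (the study's raw data are not reproduced here), and the welch t-test is shown only as one plausible choice, since the text does not state which test was used.

```python
import numpy as np
from scipy import stats

def normalized_gain(pre, post, max_score=100.0):
    """hake's normalized gain g = (post - pre) / (max - pre), averaged over students."""
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    return np.mean((post - pre) / (max_score - pre))

# hypothetical fci scores in percent; placeholders, not the study's data
this_section_pre  = np.array([35, 42, 28, 50, 38, 45, 30, 41])
this_section_post = np.array([72, 78, 55, 83, 70, 76, 58, 74])
prior_years_pre   = np.array([36, 40, 33, 48, 37, 44, 29, 43])
prior_years_post  = np.array([65, 68, 58, 75, 62, 70, 52, 69])

print("gain, this section:", normalized_gain(this_section_pre, this_section_post))
print("gain, prior years: ", normalized_gain(prior_years_pre, prior_years_post))

# compare per-student gains between the sections (welch's t-test, unequal variances)
gains_this  = (this_section_post - this_section_pre) / (100.0 - this_section_pre)
gains_prior = (prior_years_post - prior_years_pre) / (100.0 - prior_years_pre)
t, p = stats.ttest_ind(gains_this, gains_prior, equal_var=False)
print(f"welch t = {t:.2f}, p = {p:.3f}")
```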
|
students who successfully engage in self-regulated learning are able to plan their own studying, monitor their progress and make any necessary adjustments based upon the data and feedback they gather. in order to promote this type of independent learning, a recent introductory mechanics course was modified such that the homework and tests emphasized the planning, monitoring and adjusting of self-regulated learning. students were able to choose many of their own out-of-class learning activities. rather than collecting daily or weekly problem set solutions, assignments were mostly progress reports in which students reported which activities they had attempted, a self-assessment of their progress, and plans for their next study session. tests included wrappers where students were asked to reflect on their mistakes and plans for improvement. while many students engaged only superficially with the independent aspects of the course, some did demonstrate evidence of self-regulation. despite this lack of engagement, students performed as well as comparable student populations on the course exam and better on the force concept inventory.
|
in this work we analyze the stochastically forced boussinesq equations for the velocity field, (density-normalized) pressure, and temperature of a viscous incompressible fluid. these equations take the form , where the parameters are respectively the kinematic viscosity and thermal diffusivity of the fluid and is the product of the gravitational constant and the thermal expansion coefficient. the spatial variable belongs to a two-dimensional torus; that is, we impose periodic boundary conditions in space. we consider a degenerate stochastic forcing, which acts only on a few fourier modes and exclusively through the temperature equation. we prove that there exists a unique statistically invariant state of the system. more precisely, we establish:

[thm:main:res] with white noise acting only on the two largest standard modes of the temperature equation, the markov semigroup corresponding to possesses a unique ergodic invariant measure. moreover this measure is mixing, and it obeys a law of large numbers and a central limit theorem.

the interaction between the nonlinear and stochastic terms in is delicate, and leads us to develop a novel infinite-dimensional form of the hörmander bracket condition. our analysis generalizes techniques developed in the recent works , and we believe it has broader interest for systems of spdes. going back to the early 1900s, rayleigh proposed the study of buoyancy-driven fluid convection problems using the equations of boussinesq in order to explain the experimental work of bénard. today this system of equations plays a fundamental role in a wide variety of physical settings, including climate and weather, the study of plate tectonics, and the internal dynamical structure of stars; see e.g. and references therein for further background. physically speaking, the system (with ) arises as follows. consider a fluid with velocity confined between two horizontal plates, where one fixes the temperature of the fluid on the top and bottom, with (i.e. heating from below). it is typical to assume a linear relationship between density and temperature, and to impose the boussinesq approximation, which posits that the only significant role played by density variations in the fluid arises through the gravitational terms, so that the fluid velocity and temperature evolve according to . due to the presence of viscosity, the fluid is not moving at the plates and one assumes no-slip boundary conditions. the form of the boussinesq equations we consider in this work, that is, supplemented with periodic boundary conditions, is sometimes referred to in the physics community as the 'homogeneous rayleigh-bénard' or 'hrb' system. it is derived as follows: one transforms the governing equations we have just described into an equivalent homogeneous system by subtracting off a linear temperature profile. this introduces an additional excitation term in the temperature equation, and makes the temperature vanish at the plates. as a numerical simplification, one then replaces these boundary conditions with periodic ones (see ). this periodic setting is controversial in the physics community since it can produce unbounded ('grow-up') solutions, as has been observed both numerically and through explicit solutions. we will consider the system in situations with _no temperature differential_ (i.e. zero rayleigh number), so that the additional excitation term is not present and such unbounded solutions do not exist.
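for reference, the system described above can be written out explicitly. the display below is a hedged reconstruction assembled from the surrounding prose (velocity u, pressure p, temperature θ, kinematic viscosity ν, thermal diffusivity κ, buoyancy coefficient g, and white-in-time noise acting through finitely many fourier modes of the temperature only); the exact normalizations and the set of forced modes are those fixed later in the text, and the index set written below is only a placeholder label.

\begin{aligned}
  \partial_t u + (u \cdot \nabla) u + \nabla p &= \nu \Delta u + g\,\theta\, e_2\,,
      \qquad \nabla \cdot u = 0\,,\\
  \partial_t \theta + (u \cdot \nabla) \theta &= \kappa \Delta \theta
      + \sum_{k \in \mathcal{Z}_0} \sigma_k(x)\,\dot W^k(t)\,,
\end{aligned}
\qquad (x,t) \in \mathbb{T}^2 \times (0,\infty)\,,

where e_2 is the upward unit vector, the σ_k are fixed trigonometric modes indexed by a finite set, and the W^k are independent brownian motions.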
extensions to more physically realistic boundary conditions for will be addressed in forthcoming works. in the mathematical community the deterministic boussinesq equations with various boundary conditions on bounded and unbounded domains have attracted considerable attention. in one line of work the 2d system has been interpreted as an analogue of 3d axisymmetric flow, where 'vortex stretching' terms appear in the 'vorticity formulation'; see e.g. . other authors have sought to provide a rigorous mathematical framework for various physical and numerical observations in fluid convection problems; see e.g. . let us briefly motivate the stochastic forcing appearing in . due to sensitivity with respect to initial data and parameters, individual solutions of the basic equations of fluid mechanics are unpredictable and seemingly chaotic. however, some statistical properties of solutions are robust. as early as the 19th century, j. v. boussinesq conjectured that turbulent flow cannot be described solely by deterministic methods, and indicated that a stochastic framework should be used; see . more recently, the study of the navier-stokes equations with degenerate white noise forcing has been proposed as a proxy for the large-scale 'generic' stirring which is assumed in the basic theories of turbulence; this setting is ubiquitous in the turbulence literature, see e.g. and references therein. in this view, invariant measures of the stochastic equations of fluid dynamics would presumably contain the statistics posited by these theories. the closely related question of unique ergodicity and mixing provides rigorous justification for the explicit and implicit assumptions invoked by physicists and engineers when measuring statistical properties of turbulent systems. the existence of invariant measures for forced-dissipative systems is often easy to prove with classical tools, namely by making use of the krylov-bogoliubov averaging procedure together with energy (compactness) estimates, but the uniqueness of these measures is a deep and subtle issue. to establish this uniqueness one can follow the path laid out by the doob-khasminskii theorem, and more recently expanded upon in . this strategy requires proving certain smoothing properties of the associated markov semigroup, and showing that a common state can be reached by the dynamics regardless of initial conditions (irreducibility). without stochastic forcing, solutions of our system converge to the trivial equilibrium, so that the proof of irreducibility is straightforward in our context. thus the main challenge of this work is to establish sufficient smoothing properties for the markov semigroup associated to . in order to discuss the difficulties in establishing smoothing properties for the markov semigroup, it is useful to recall the canonical relationship between stochastic evolution equations and their corresponding fokker-planck (kolmogorov) equations.
consider an abstract equation on a hilbert space , where , and is a ( finite or infinite ) collection of independent 1d brownian motions .we denote solutions with the initial condition at time by , and define the markov semigroup associated to according to , where is any ` observable ' .then solves the fokker - planck equation corresponding to given by + \langle f(u ) , d\psi \rangle , \quad \psi(0 ) : = { \phi}\label{eq : fokk : er : plank } \end{aligned}\ ] ] where we view as an element in .the interested reader should consult for more on the general theory of second order pdes posed on a hilbert space .there is a wide literature devoted to proving uniqueness and associated mixing properties of invariant measures for nonlinear stochastic pdes when is non - degenerate or mildly degenerate .see e.g. and references therein. roughly speaking ,the fewer the number of driving stochastic terms in , the more degenerate the diffusion in , and the more difficult it becomes to establish smoothing properties for the markov semigroup .moreover , while even the non - degenerate setting poses many interesting mathematical challenges , such stochastic forcing regimes are highly unsatisfactory from the point of view of turbulence where one typically assumes a clear separation between the forced and dissipative scales of motion . going back to the seminal work of hrmander ( and cf . ) , a theory of parabolic regularity for finite dimensional pdes of the general form with degenerate diffusion terms was developed .this theory of ` hypoellipticity ' can be interpreted in terms of finite - dimensional stochastic odes for which these degenerate parabolic pdes are the corresponding kolmogorov equations .this connection suggested the potential for a more probabilistic approach , initiating the development of the so - called malliavin calculus ; see and subsequent authors . in any case , the work of hrmander and malliavin has led to an extensive theory of unique ergodicity and mixing properties for finite - dimensional stochastic odes . 
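to make the finite-dimensional picture concrete, the sketch below estimates the markov semigroup P_t φ(u_0) = E[φ(u(t, u_0))] by monte carlo for a toy scalar sde integrated with euler–maruyama; the drift, diffusion coefficient and observable are invented for illustration and have nothing to do with the boussinesq system itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def semigroup_estimate(phi, f, sigma, u0, t, dt=1e-3, n_samples=20_000):
    """monte carlo estimate of P_t phi(u0) = E[phi(u(t, u0))] for the scalar sde
    du = f(u) dt + sigma dW, integrated by euler-maruyama."""
    n_steps = int(t / dt)
    u = np.full(n_samples, float(u0))
    for _ in range(n_steps):
        u += f(u) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_samples)
    return phi(u).mean()

f = lambda u: -u - u**3        # a dissipative drift with a cubic nonlinearity (toy)
phi = lambda u: u**2           # an observable
for t in (0.5, 1.0, 2.0, 4.0):
    print(t, semigroup_estimate(phi, f, sigma=0.5, u0=1.0, t=t))
```

as t grows the printed values stabilize, which is the finite-dimensional shadow of convergence to a unique invariant measure.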
by comparison , for stochastic pdes ( which correspond to the situation when is posed on an infinite dimensional space )this theory of hypoellipticity remains in its infancy .recently , however , in a series of groundbreaking works , a theory of unique ergodicity for degenerately forced infinite - dimensional stochastic systems has emerged .these works produced two fundamental contributions : firstly the authors demonstrated that to establish the uniqueness of the invariant measure it suffices to prove time asymptotic smoothing ( asymptotic strong feller property ) instead of ` instantaneous ' smoothing ( strong feller property ) .this is an abstract result from probability and it applies to very general settings including ours .secondly the authors generalize the methods of malliavin ( and subsequent authors ) in order to prove the asymptotic strong feller property for certain infinite - dimensional stochastic systems .these works resulted in an infinite - dimensional analogue of the hrmander bracket condition .for the second point the application of the methods in is more delicate and must be considered on a case - by - case basis ; it requires a careful analysis of the interaction of the nonlinear and stochastic terms of the system .in our situation the bracket condition in is not satisfied and needs to be replaced by a weaker notion .this required us to rework and generalize many basic elements of their approach .to explain our contributions at a more technical level , let us recall what is meant by a ` hrmander bracket condition ' in the context of systems of the general form .define and for take , [ e , \sigma_k ] , e : e \in \mathcal{v}_{m-1 } , k = 1 , \ldots , d\ } \, , \label{eq : finite : braks : span : set}\end{aligned}\ ] ] where is the drift term in , and for any frchet differentiable , (u ) : = \nabla e_2(u ) e_1(u ) - \nabla e_1(u ) e_2(u ) .\label{eq : lie : brak : abs}\end{aligned}\ ] ] this operation ] ( where are previously generated constant vector fields ) suffice to build quadratic forms satisfying .indeed , this approach has now been successfully employed for several important examples , including the 2d and 3d navier - stokes equations and the ginzburg - landau equations ; see . in these worksalgebraic conditions on the set of stochastically forced modes have been derived which guarantee that any finite - dimensional space can be generated from these types of brackets ; one obtains a collection of -independent elements which form an orthonormal basis for .this strategy has proven effective for certain scalar equations , but its limitations are evident in slightly more complicated situations .we believe that provides an illuminating case study of these difficulties which has lead us to generalize .observe that our model is distinguished by two key structural properties .firstly , the buoyancy term is the only means of spreading the effect of the stochastic forcing from the temperature equation , , to the momentum equations , . in particular note that this buoyancy term is linear , and therefore vanishes after two lie bracket operations with constant vector fields .secondly , the advective structure in leads to a delicate ` asymmetry ' in the nonlinear terms .for example this means that a more refined analysis is needed to address the spread of noise in the temperature equation alone .is the only second order term in the temperature equation . 
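the bracket operation just defined can be computed mechanically; the sketch below does so for a made-up three-dimensional drift with linear dissipation and a bilinear, advection-like coupling, bracketed against constant vector fields. none of these fields are the actual drift or forced modes of the system — the point is only to display the mechanism described above: one bracket with a constant field yields a state-dependent direction, and a further bracket with another constant field yields a new constant direction.

```python
import sympy as sp

u1, u2, u3 = sp.symbols('u1 u2 u3')
u = sp.Matrix([u1, u2, u3])

def lie_bracket(e1, e2):
    """[e1, e2](u) = grad(e2)(u) e1(u) - grad(e1)(u) e2(u)."""
    return e2.jacobian(u) * e1 - e1.jacobian(u) * e2

# toy drift: linear dissipation plus a bilinear coupling (purely illustrative)
F = sp.Matrix([-u1 + u2 * u3,
               -u2 + u1 * u3,
               -u3])
sigma = sp.Matrix([0, 0, 1])   # a constant 'forced' direction in the last component
tau   = sp.Matrix([0, 1, 0])   # a second constant direction

b1 = lie_bracket(sigma, F)     # [sigma, F](u): state-dependent
b2 = lie_bracket(tau, b1)      # [tau, [sigma, F]]: constant again
print(sp.simplify(b1.T))       # Matrix([[u2, u1, -1]])
print(sp.simplify(b2.T))       # Matrix([[1, 0, 0]])  -- a genuinely new direction
```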
]concretely we find that , by combining these observations , one obtains ,\sigma_{k_2 } ] = 0 ] for constant vector fields and which belong to the -component .remarkably , we found that this chain of admissible brackets leads to new constant vector fields exclusively in the -component of the phase space .this surprising observation requires a series of detailed computations but is perhaps anticipated by the advective structure of the nonlinear terms .it is in addressing the spread of noise in the momentum components of the phase space that the condition breaks down .forced directions in the temperature component are pushed to the momentum components through the buoyancy term . however , due to the presence of the nonlinearity , they are ` mixed ' with terms which have an unavoidable and complicated directional ( non - frequency - localized ) dependence on .crucially , due to the advective structure in , these ` error terms ' are concentrated only in the temperature component of the phase space .we are therefore able to push these error terms to small scales by using the ` pure ' directions already generated ( following the procedure described in the previous paragraph ) .more precisely , in the language of , , we are able to show that for every we can find and sets consisting of elements of the form , where the sequence ( which are essentially vectors consisting of trigonometric functions ) forms an orthonormal basis for the phase space and are functions taking values in with a complicated dependence on but which are supported on ` high ' frequencies , i.e. wavenumbers larger than .these structural observations for lead us to formulate the following generalization of .[ def : our : cond ] let and be hilbert spaces with compactly embedded in .we say that satisfies the _ generalized hrmander condition _ if for every and every there exist , , and a finite set ( where is defined according to ) such that where is independent of and , and , are defined in , respectively .below we demonstrate that the condition is sufficient to establish suitable time asymptotic smoothing properties for the markov semigroup associated to could be shown to hold in a more general setting with essentially the same analysis . ]this allows us to apply the abstract results from to complete the proof of theorem [ thm : main : res ] .the main technical challenge arising from the modified condition is that it requires us to significantly rework the spectral analysis of the malliavin matrix appearing in .the technically oriented reader can skip immediately to section [ sec : mal : spec : bnds ] for further details .the manuscript is organized as follows : in section [ sec : math : setting ] we restate our problem in an abstract functional setting and introduce some general definitions and notations. then we reduce the question of uniqueness of the invariant measure to establishing a time asymptotic gradient estimate on the markov semigroup .next , in section [ sec : grad : control ] we explain how , using the machinery of malliavin calculus , this gradient bound reduces to a control problem for a linearization of . 
in turn we show that this control problem may be solved by establishing appropriate spectral bounds for the malliavin covariance matrix .section [ sec : mal : spec : bnds ] is devoted to proving that our new form of the hrmander condition , , implies these spectral bounds , modulo some technical estimates postponed for section [ sec : braket : est : mal : mat ] .section [ sec : hormander : brak ] provides detailed lie bracket computations leading to the modified condition .finally in section [ sec : mixing ] we establish mixing properties , a law of large numbers and a central limit theorem for the invariant measure by making careful use of recent abstract results from .appendices [ sec : moment : est ] and [ subsect : malliavin ] collect respectively statistical moment bounds for ( and associated linearizations ) and a brief review of some elements of the malliavin calculus used in our analysis .in this section we formulate as an abstract evolution equation on a hilbert space and define its associated markovian framework .then we formulate our main result in theorem [ thm : mainresult ] and give an outline of the proof , which sets the agenda for the work below . in the rest of the paper , we consider in the equivalent , vorticity formulation .namely , if we denote , then by a standard calculation we obtain the system is posed on , where is the square torus ^ 2 = { \mathbb{r}}^2/(2\pi { \mathbb{z}}^2) ] which allows us to generate new pure directions in both the temperature and vorticity components of the phase space ; compare with figure [ fig : brak:1 ] in section [ sec : hormander : brak ] and the discussion in section [ sec : hor : int ] .] we remark that the case when the forcing is non - degenerate ( nontrivial on all fourier modes ) in both the momentum and temperature equations , was addressed in via coupling methods closely following the approach in .see also .* [ rem : forcing ] we emphasize that , when the random perturbation acts in the temperature equation only , this leads to a different geometric criteria for the noise structure compared to . to see this difference at a heuristic level we write in the fourier representation : observe that at first only fourier modes of in are excited . then , through the buoyancy term on the right hand side of , the fourier modes of in become excited .this is a purely formal argument as at the same time many modes in become excited .if all elements of have the same norm , such an excitation is not sufficient for the nonlinearity in , acting on its own , to propagate the noise to higher fourier modes .however , excitation in the fourier modes of in propagates to higher fourier modes in via the nonlinearity of ; here the norm restriction is clearly absent .thus , there is an additional mixing mechanism in the boussinesq system compared to the navier - stokes equation . *the class of functions for which the mixing condition holds is slightly restrictive .while it does allow for observables like individual fourier coefficients of the solution or the total energy of solutions , a further analysis is required to extend to s that involve pointwise spatial observations of the flow , for example ` structure functions ' .we leave these questions for future work . following ( and cf . ) , we explain how the proof of theorem [ thm : mainresult ] can be essentially reduced to establishing a time asymptotic gradient estimate , , on the markov semigroup . 
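schematically, the gradient estimate referred to here takes the following shape (a hedged reconstruction of the standard asymptotic-strong-feller bound; the precise constants, norms and time sequence are those fixed in the statements quoted below):

\|\nabla P_{t_n} \phi(u_0)\| \;\le\; C(\|u_0\|)\Big( \sup_{u}|\phi(u)| \;+\; \delta_n \, \sup_{u}\|\nabla \phi(u)\| \Big),
\qquad t_n \nearrow \infty, \quad \delta_n \to 0\,,

so that, along the sequence t_n, gradients of observables are asymptotically controlled by the size of the observable itself; this is strictly weaker than the strong feller property but, as recalled next, still suffices for uniqueness of the invariant measure.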
by the krylov - bogoliubov averaging method it is immediate to prove the existence of an invariant measure in the present setting .indeed , fix any , and define the probability measures as from and it follows that thus for any the set is compact in and by the markov inequality and therefore is tight , and hence weakly compact . making use of the feller property it then follows that any weak limit of this sequence is an invariant measure of . for is a compact , convex set .since the extremal points of are ergodic invariant measures for , we therefore infer the existence of an ergodic invariant measure for .this also shows that if the invariant measure is unique , it is necessarily ergodic .see e.g. for further details . ]we now turn to the question of uniqueness which in contrast to existence is highly non - trivial .the classical theoretical foundation to our approach is the doob - khasminskii theorem , see .while this approach has been fruitful for a stochastic perturbations acting on all of the fourier modes ( see e.g. ) ; it requires an instantaneous ( or at least finite time ) smoothing of , known as the _ strong feller property_. is said to be strong feller for some if .] this property is not expected to hold in the current hypo - elliptic setting . in recent works it has been shown that the strong feller property can be replaced by a much weaker notion .the following theorem from is is the starting point for all of the work that follows below .[ thm : mh : asf : wi ]let be a feller markov semigroup on a hilbert space and assume that the set of invariant measures of is compact .suppose that * the semigroup is _ weakly irreducible _ namely there exists such that for every and every which is invariant under .in other words there exists a point common to the support of every invariant measure .* there exists a non - decreasing sequence and a sequence , with such that for every and where the constant may depend on ( but not on ) . then the collection of invariant measures contains at most one element . in our situation the proof of ( i ) is more or less standard and follows precisely as in .the main difficulty is to establish the asymptotic smoothing property ( ii ) of theorem [ thm : mh : asf : wi ] .we prove the following stronger version of , which is also useful for other parts of theorem [ thm : mainresult ] the proof of the mixing and pathwise convergence properties , .we recall the convention in remark [ rmk : constants : conven ] .[ prop : grad : est : ms ] for every and every , the markov semigroup defined by satisfies the estimate for every and , where is independent of and . with proposition [ prop :grad : est : ms ] the uniqueness of the invariant measure follows immediately from theorem [ thm : mh : asf : wi ] . with slightly more work we can also use to establish the attraction properties ( i)(iii ) in theorem [ thm : mainresult ] . 
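for intuition about the krylov–bogoliubov averaging used above, the sketch below performs the analogous time averaging numerically for a toy one-dimensional ornstein–uhlenbeck process, whose invariant measure is an explicit gaussian; the time averages of an observable converge to the invariant expectation as the averaging window grows. all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def ou_time_average(phi, u0, t_final, dt=1e-3, a=1.0, sigma=0.7):
    """time average of phi(u_s) over [0, T] along one euler-maruyama trajectory of
    the ornstein-uhlenbeck sde du = -a u dt + sigma dW started at u0."""
    n = int(t_final / dt)
    noise = sigma * np.sqrt(dt) * rng.standard_normal(n)
    u, acc = u0, 0.0
    for k in range(n):
        acc += phi(u) * dt
        u += -a * u * dt + noise[k]
    return acc / t_final

phi = lambda u: u**2
exact = 0.7**2 / (2 * 1.0)     # the invariant measure is N(0, sigma^2 / (2a))
for T in (10.0, 100.0, 1000.0):
    print(T, ou_time_average(phi, u0=2.0, t_final=T), "   exact:", exact)
```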
since this mainly requires the introduction of some further abstract machinery from we postpone the rest of the proof of theorem [ thm : mainresult ] to final part of section [ sec : mixing ] .in this section we explain how the estimate on in can be translated to a control problem through the malliavin integration by parts formula .this lead us to study the so called malliavin covariance matrix , which links the existence of a desirable control to the properties of successive hrmander - type lie brackets of vector fields ( on ) associated to .suitable spectral bounds for are given in proposition [ prop : mal : cor:1 ] and we conclude this section by explaining how these bounds can be used in conjunction with a control built around to complete the proof of proposition [ prop : grad : est : ms ] . the involved proof of proposition [ prop : mal : cor:1 ] is delayed for sections [ sec : mal : spec : bnds ] , [ sec : hormander : brak ] , [ sec : braket : est : mal : mat ] below . as we already noted in the introduction , although the statement of proposition [ prop : mal : cor:1 ] looks similar to corresponding results in , the proof is significantly different due to the particular nonlinear structure of . as such proposition [ prop : mal : cor:1 ] constitutes the main mathematical novelty of this work .let be the solution of and assume the convention from remark [ rmk : constants : conven ] where we let .then for any , we have see proposition [ prop : wellposed ] and ( * ? ? ?* section 3.3 ) ] where for , denotes the unique solution of and .the crucial step in establishing is to ` approximately remove ' the gradient from in .as such we seek to ( approximately ) identify with a malliavin derivative of some suitable random process and integrate by parts , in the malliavin sense . in appendix[ subsect : malliavin ] we recall some elements of this calculus which are used throughout this section . for an extended treatment of the malliavin theory we refer to e.g. .recall , that in our situation the malliavin derivative , satisfies , { \mathbb{r}}^d ) } = \lim_{\epsilon \to 0 } \frac{1}{\epsilon } \left ( u\bigl(t , u_0 , w + \epsilon \smallint _ 0^{\cdot } v ds\bigr ) - u(t , u_0 , w ) \right).\end{aligned}\ ] ] we may infer that for ; { \mathbb{r}}^{d})) ] by notice that , by the duhamel formula , for any the function satisfies with these preliminaries we now continue the computation started in . using the malliavin chain rule and integration by parts formula , as recalled in , , we infer that for any and any suitable ( skorokhod integrable ) , { \mathbb{r}}^{d } ) ] be the adjoint of defined in .we observe that , \label{eq : a : star : op : def}\end{aligned}\ ] ] where is the adjoint of defined in . 
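the malliavin derivative recalled above is, by definition, a directional derivative of the solution map with respect to a cameron–martin shift of the noise path; on a toy scalar sde this can be probed by literally perturbing the driving increments, as in the sketch below. the drift, the direction v and all parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def solve(f, sigma, u0, dW, dt):
    """euler-maruyama solution of du = f(u) dt + sigma dW for a fixed increment path dW."""
    u = u0
    for inc in dW:
        u = u + f(u) * dt + sigma * inc
    return u

f = lambda u: -u - u**3
sigma, u0, t, dt = 0.5, 1.0, 1.0, 1e-4
n = int(t / dt)
dW = np.sqrt(dt) * rng.standard_normal(n)   # one fixed realization of the noise
v = np.ones(n)                              # direction v(s) = 1

eps = 1e-5
base    = solve(f, sigma, u0, dW, dt)
shifted = solve(f, sigma, u0, dW + eps * v * dt, dt)   # W -> W + eps * int_0^. v ds
print("finite-difference malliavin derivative <D u(t), v> ~", (shifted - base) / eps)
```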
here , for , is the solution of the ` backward ' system ( see ) we then define the malliavin matrix we now build the control and derive the associated in using the following iterative construction .denote by the control restricted to the time interval ] .if we denote then using we determine according to having defined the control , and the associated error , by and respectively , we now state and prove ( modulo a spectral bound on , proposition [ prop : mal : cor:1 ] ) the key decay estimate on .this estimate is used in section [ sec : proof : main : prop ] to complete the proof of proposition [ prop : grad : est : ms ] .[ lem : rho : bound ] for any , there exists which determines in so that for every even , moreover , we have the block adapted structure where and is the smallest integer greater than or equal to .[ rmk : prelim : lem : rho : bound ] we choose the exponent 8 as it is sufficient for the estimates on ; similar estimates are valid for any power greater or equal to two . to prove lemma [ lem : rho : bound ], we show that the control , when it is active , is effective in pushing energy into small scales where it is dissipated by diffusion . to make this precise recall the definition of , , and from and .fix specified below and for split , defining while for large estimates for essentially make use of the parabolic character of , establishing suitable bounds on requires a detailed understanding of the operator .we need to show , for sufficiently small , that indeed pushes energy into small scales , that is , is small .the following lemma from ( * ? ? ?* lemma 5.14 ) , shows that this in turn follows from uniform positivity of on a cone around .[ lem : good : t : reg : as ] suppose that is a positive , self - adjoint linear operator on a separable hilbert space .suppose that for some and we have that where then , for any , the next proposition shows that holds true for on a large subset of the probability space .[ prop : mal : cor:1 ] let be as in , relative to solving . for any , ] is a non - negative , decreasing function with as .we emphasize that is independent of .proposition [ prop : mal : cor:1 ] is a direct consequence of the markov property and theorem [ thm : mind : blowing : conclusion ] below . while similar results have appeared in previous works , the proof required us to develop a novel approach due to the particular nonlinear structure in .we are now prepared to prove lemma [ lem : rho : bound ] .we use the splitting from , .the constant appearing in the definition of this splitting is fixed in the estimate on which we address first . by the positive definiteness of , it follows that almost surely , for any .then , as is -measurable , from , , and we infer for appropriate .fix such an in , .note that the bound holds independently of the value of appearing in .next , we estimate . by , we infer where . 
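the control built here appears to follow the standard recipe of applying the adjoint of the 'control-to-state' map to a tikhonov-regularized inverse of the malliavin (gramian) matrix; in finite dimensions the analogous formula is v = A^*(A A^* + β)^{-1} ρ, which is always well defined and damps the part of ρ seen by the gramian. the sketch below illustrates this with a random matrix A standing in for the infinite-dimensional operator of the text and a random vector ρ standing in for the residual; everything is a toy stand-in, not the actual operators.

```python
import numpy as np

rng = np.random.default_rng(3)

d_state, d_noise, beta = 6, 40, 1e-3
A   = rng.standard_normal((d_state, d_noise))   # stand-in 'control -> state' map
rho = rng.standard_normal(d_state)              # stand-in residual to be damped

M = A @ A.T                                     # finite-dimensional 'malliavin matrix'
v = A.T @ np.linalg.solve(M + beta * np.eye(d_state), rho)   # regularized control
residual = rho - A @ v                          # equals beta * (M + beta I)^{-1} rho

print("||rho||       =", np.linalg.norm(rho))
print("||rho - A v|| =", np.linalg.norm(residual))
print("||v||         =", np.linalg.norm(v))
```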
for fixed by and for any , and consider the set where is defined in .using lemma [ lem : good : t : reg : as ] , proposition [ prop : mal : cor:1 ] with replaced by , and we have which holds for any ] ; { \mathbb{r}}^{d } ) } \leq \|v_{2n , 2n+1}\|_{l^2([2n,2n+1 ] ; { \mathbb{r}}^{d } ) } \leq \beta^{-1/2 } \|j_{2n , 2n+1}\| \|\rho_{2n}\| \ , , \label{eq : v : norm : obs}\end{aligned}\ ] ] and consequently for any ; { \mathbb{r}}^{d } ) , h)}\|v_{2n , 2n+1}\|_{l^2([2n , 2n+1 ] ; { \mathbb{r}}^{d } ) } \notag \\ & \leq c\beta^{-1/2 } \left(1 + \sup_{s \in [ 2n , t ] } \|{\mathcal{j}}_{s , t}\|^2 \right)\|\rho_{2n}\|,\end{aligned}\ ] ] and for any ] for any , and we may thus use the generalized it isometry , . by , is measurable , where is defined in , and consequently by , if .thus for the first term in , we make use of for a constant independent of . to bound the second term in , we compute an explicit expression for . by lemma [ lem : mal : est ] , each of , , , , and are differentiable in the malliavin sense and lie in the space for any ( see ) .it thus follows from that for any and any .moreover , recalling that by , is measurable , implies that for any . then by the malliavin product rule ( see e.g. ( * ? ? ?* lemma 3.6 ) ) we compute for any and ] ;{\mathbb{r}}^{d } ) } \notag\\ \leq & \beta^{-1/2 } \|{\mathfrak{d}}_s^j { \mathcal{j}}_{2n,2n+1}\| \|\rho_{2n}\| + \beta^{-1 } \|{\mathfrak{d}}_s^j { \mathcal{a}}_{2n,2n+1}\|_{\mathcal{l}(l^2([2n , 2n+1],{\mathbb{r}}^d ) , h ) } \|{\mathcal{j}}_{2n,2n+1}\|\|\rho_{2n}\| \notag\\ & + 2 \beta^{-1 } \|{\mathfrak{d}}_s^j { \mathcal{a}}_{2n,2n+1}^\ast\|_{\mathcal{l}(h , l^2([2n , 2n+1],{\mathbb{r}}^d ) ) } \|{\mathcal{j}}_{2n,2n+1}\|\|\rho_{2n}\| \ , .\label{eq : mid : step}\end{aligned}\ ] ] finally , we use , , , and to conclude ;{\mathbb{r}}^{d } ) } ds \\ & \leq c \beta^{-2 } \exp(\eta/2\|u_0\|^2 ) \sum_{n = 0}^{\infty } [ { \mathbb{e}}\|\rho_{2n}\|^4]^{1/2 } \notag\\ & \leq c \beta^{-2 } \exp(\eta \|u_0\|^2 ) \ , ,\label{eq : cost : bbd : ito : corr}\end{aligned}\ ] ] where . combining , withwe now infer , completing the proof of proposition [ prop : grad : est : ms ] .in this section we present the main technical result of this work , theorem [ thm : mind : blowing : conclusion ] , which yields a probabilistic spectral bound on the malliavin matrix ( see ) . recall that in section [ sec : math : setting ] we established the uniqueness of the invariant measure associated to assuming a gradient estimate on the markov semigroup , .then , in section [ sec : grad : control ] , we established this gradient estimate modulo proposition [ prop : mal : cor:1 ] , which is a corollary to theorem [ thm : mind : blowing : conclusion ] .hence , we have reduced the proof of the uniqueness in theorem [ thm : mainresult ] to the proof of theorem [ thm : mind : blowing : conclusion ] .[ thm : mind : blowing : conclusion ] let and define according to , relative to solving . fix any ] .we show in section [ sec : hormander : brak ] , that for each there exists such that the set is contained in . 
here , recall that and are basis elements for defined in , above .the elements are dependent ` error ' terms , which reside in ( see ) and satisfy the bound ; the explicit form for these terms is given in , below .the upshot is that for any ( finite ) we are only able to identify -dependent subsets of .hence , we need to introduce a new form of the hrmander described above in ( in more general terms ) satisfied by for any .we are ready to state the lower bound on .[ prop : large : conclusion ] fix any any integers and define by .then , for any and any ] are defined as in . using elementary algebra , we obtain that if and are not parallel and . under appropriate algebraic assumptions on the set of directly forced modes ,one therefore obtains that for any , for large enough . as we already noted in the introduction ,this strategy of repeated brackets with constant vector field to generate exactly has been used in all of the previously known examples ; see .our situation is completely different . since the random perturbation appears only in the temperature equation in , we immediately see , recalling the notation , that for any , , \sigma_{k'}^{m ' } ] = b(\sigma_k^m , \sigma_{k'}^{m ' } ) + b(\sigma_{k'}^{m ' } , \sigma_{k}^{m } ) = 0 \,,\ ] ] and therefore no new modes are generated .the observation in suggests that we need to make more carefully use of the interaction between the buoyancy term and the advective structure in .the strategy which we devised to generate suitable directions is summarized in figure [ fig : brak:1 ] below .strikingly , a bracket , f(u ) ] , \sigma_{k'}^{l ' } ] = c_1b(\psi_k^{l+1 } , \sigma_{k'}^{l ' } ) + c_2 b(\psi_{k'}^{l'+1 } , \sigma_{k}^{l}) ] we have the antisymmetry and jacobi identities = -[e_2 , e_1 ] , \quad [ [ e_1 , e_2 ] , e_3 ] + [ [ e_2 , e_3 ] , e_1 ] + [ [ e_3 , e_1 ] , e_2 ] = 0 , \label{eq : jacobi : and : his : many : colored : identity}\end{aligned}\ ] ] valid for any . from and we have that for any note also that in what follows the superscripts appearing in the basis elements , are understood modulo 2 , for example by we mean . 
for any we define .[ lem : b : psi : sigma ] for any and \ , , \\b(\psi_j^{m } , \psi_k^{m ' } ) = & \frac{(-1)^{1+m m'}}{2}\frac{(j^\perp \cdot k ) } { |j|^{2 } } \left [ \psi^{m+m'}_{j+k } + ( -1)^{m'+1 } \psi^{m+m'}_{j - k } \right]\,.\end{aligned}\ ] ] using lemma [ lem : b : psi : sigma ] and we have = & g(-1)^{(m+1)(m'+1 ) } \frac{(j^\perp \cdotk ) } { 2 } \left[(-1)^{m ' } b(j , k)\sigma_{j - k}^{m+m'+1 } - a(j , k)\sigma_{j+k}^{m+m'+1 } \right ] \ , , \label{eq : z : sig : form}\end{aligned}\ ] ] where from these relations , the following proposition follows easily .[ prop : generatesigma ] let , be as in .then , with given by , -[z_j^1(u ) , \sigma_k^0 ] , \label{eq : new : sig : dir:1}\\ g ( j^\perp\cdot k ) a(j , k ) \sigma_{j+k}^1 & = [ z_j^0(u ) , \sigma_k^0 ] - [ z_j^1(u ) , \sigma_k^1 ] , \label{eq : new : sig : dir:2}\\ g ( j^\perp\cdot k ) b(j , k ) \sigma_{j - k}^0 & = [ z_j^1(u ) , \sigma_k^0 ] - [ z_j^0(u ) , \sigma_k^1 ] , \label{eq : new : sig : dir:3}\\ g ( j^\perp\cdot k ) b(j , k ) \sigma_{j - k}^1 & = - [ z_j^1(u ) , \sigma_k^1 ] - [ z_j^0(u ) , \sigma_k^0 ] .\label{eq : new : sig : dir:4}\end{aligned}\ ] ] [ rm : spanningsigma ] the diagram in figure [ fig : new : sig : directions ] and an induction argument detailed in section [ sec : it : proof : by : con ] and illustrated in figure [ fig : span : arg:1 ] show that starting with the forced directions for each , it is possible to reach for any and .if we replaced the vectors in the definition of by other elements in , figure [ fig : new : sig : directions ] would change , namely the segments parallel to axes would be changed to segments parallel to the new directions in . in this casemore a complicated algebraic condition as in is needed to demonstrate that generates a spanning set for .directions are different and they include an error term with a component in the -direction . first note that can be rewritten as where .note that , by , we see that is concentrated only in its component. since we can generate , by , we can also generate ( with an error term ) , whenever .this constitutes the first downward branch in the lower portion of figure [ fig : brak:1 ] .using we are able to reach basis function that are not accessible by the brackets leading to . note that this second case is represented graphically the last lower branch of figure [ fig : brak:1 ] .[ prop : generatepsi ] let , , and then - [ z_{\ell'}^{1}(u),y_{\vec{e}_{1}}^1(u ) ] + h^{0,0}_{\ell',\vec{e}_{1},}(u ) + h^{1,1}_{\ell',\vec{e}_{1},}(u),\end{aligned}\ ] ] and - [ z_{\ell'}^{0}(u),y_{\vec{e}_{1}}^1(u ) ] + h^{0,1}_{\ell',\vec{e}_{1},}(u ) - h^{1,0}_{\ell',\vec{e}_{1}}(u).\end{aligned}\ ] ] from lemma [ lem : b : psi : sigma ] and the fact ( see ) one has combining , , and lemma [ lem : b : psi : sigma ] , we have & = ( -1)^{mm'+1 } \frac{g^2 \ell_2}{2}\big[\frac{2 + \ell_2 ^ 2}{1 + \ell_2 ^ 2}\psi_{\ell'+e_1}^{m+m ' } + \frac{\ell_2 ^ 2}{1 + \ell_2 ^ 2}(-1)^{m'}\psi_{\ell'-e_1}^{m+m'}\big ] + h_{\ell',e_1}^{m , m'}(u ) .\label{eq : new : de : bad : tails:2}\end{aligned}\ ] ] the proposition follows after eliminating the term involving by considering first the cases , to determine and , to determine . 
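the wavenumber bookkeeping in the lemma above — products of two trigonometric basis functions populate the modes j+k and j−k — is just the elementary product-to-sum identities. a quick symbolic check of the scalar versions, with j*x and k*x standing in for the dot products j·x and k·x, is below.

```python
import sympy as sp

x, j, k = sp.symbols('x j k', real=True)
a, b = j * x, k * x    # scalar stand-ins for the phases j.x and k.x

identities = [
    sp.cos(a) * sp.cos(b) - (sp.cos(a - b) + sp.cos(a + b)) / 2,
    sp.sin(a) * sp.cos(b) - (sp.sin(a + b) + sp.sin(a - b)) / 2,
    sp.sin(a) * sp.sin(b) - (sp.cos(a - b) - sp.cos(a + b)) / 2,
]

# each difference simplifies to zero: a product of two single modes is a combination
# of the 'sum' and 'difference' modes, which is how the nonlinearity moves the forcing
# to new wavenumbers
print([sp.simplify(sp.expand_trig(e)) for e in identities])   # [0, 0, 0]
```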
combining and proposition [ prop : generatepsi ]we now define the error term in the -directions by combining , proposition [ prop : generatepsi ] , and , we have for each , , - [ z_{j+\vec{e_1}}^{1}(u),y_{\vec{e}_{1}}^1(u ) ] ) & \textrm { if } j_1 = 0 , m = 0,\\ \dfrac{1 + |j|^2}{g^2 |j|^3 } ( [ z_{j+\vec{e_1}}^{1}(u),y_{\vec{e}_{1}}^0(u ) ] - [ z_{j+\vec{e_1}}^{0}(u),y_{\vec{e}_{1}}^1(u ) ] ) & \textrm { if } j_1 = 0 , m = 1 \ , . \end{cases } \label{eq : junk : express:2}\end{aligned}\ ] ] we next summarize some basic properties of in the following lemmata . [lem : junk : est : basic ] fix any with , , and any .then , ( the component of is zero ) and where the constant is independent of .moreover is linear , that is , is affine .the proof is a straightforward consequence of definitions .for example by careful inspection we have an estimate of any .lemma [ lem : junk : est : basic ] does not provide us with sufficient estimate for as it grows both in and .however , we crucially use the fact as follows .we can generate sufficiently many , and consequently subtract from , pure modes .hence , we generate all modes with but for approximation of we use only those with , the rest we use for controlling the size of the error ( for details see the proof of lemma [ lem : junk : e ] ) . to this endwe derive estimates for projections of into high fourier modes . recall that is the orthogonal projection on complement of and denote [ lem : junk : est ] for every integers , with , and every integer and where is independent of and . since the eigenvalue , cf . , one has by the generalized poincar inequality that by careful inspection of , noting that is affine in , we obtain where .the exact dependence of the right hand side on can be inferred from the fact that each derivative of can produce at most one factor of .this section is devoted to the proof of proposition [ prop : small : conclusion ] . to establish this upper bound " on the quadratic forms defined in section [ sec : mal : spec : bnds ] , recall that in section [ sec : hormander : brak ] we showed that ( see , ) for sufficiently large .we thus to translate each of the lie bracket computations in section [ sec : hormander : brak ] into quantitative bounds . roughly speaking , we would like to show that and that , starting from any admissible vector field , cf .\rangle , \langle \phi , [ e , f]\rangle \textrm { are ` small ' for all } k \in { \mathcal{z } } , l \in \{0,1\}. \label{eq : rough : quant : bbd : stat2}\end{aligned}\ ] ] to achieve , we broadly follow an approach recently developed in .. see and also e.g. for further details . ]notice that and define over test functions and admissible vector fields . to address the first case in we make use of a change of variable .expanding in this new variable we obtain a wiener polynomial with coefficient similar to ] , at least up to a change of variable .we then make use of the fact that we can bound the maximum of in terms of , for example , and norms of to deduce the desired implication .observe that our quadratic forms depend on and thus have a strong probabilistic dependence . 
indeedthe existence of these ` error ' terms in means that we have to carefully track the growth of constants as a function of the number of lie brackets we take .we also need to explain , at a quantitative level , how we are able to push error terms to entirely to high wavenumbers .neither of these concerns can be addressed from an ` obvious inspection ' of the methods in .in addition to these mathematical concerns , we have developed several lemmata [ lem : suit : reg ] , [ lem : aux : aux ] which we believe streamline the presentation of some of the arguments in comparison to previous works .the rest of the section is organized as follows : we begin with some generalities introducing or recalling some general lemmata that will be used repeatedly in the course of arguments leading to the rigorous form of . in subsection [ sec : implications : eigenvals ] we present the series of lemmas [ lem : zeroth : step][lem : de : to : de : creation : step ] each of which corresponds to one ( or more ) of the lie brackets computed in section [ sec : hormander : brak ] .as we proceed we refer to figure [ fig : brak:2 ] to help guide the reader through some admittedly involved computations . in subsection[ sec : it : proof : by : con ] we piece together the proved implications in an inductive argument to complete the proof of proposition [ prop : small : conclusion ] in all that follows we maintain the convention from remark [ rmk : constants : conven ] , that is , all constants are implicitly dependent on given parameters of the problem .note also that we carry out our arguments on a general time interval ] define the semi - norms ,h^{\beta } ) } : = \sup_{\substack{t_1 \neq t_2\\ t_1,t_2\in[a , b ] } } \frac{\| u(t_1)- u(t_2)\|_{h^{\beta}}}{|t_1-t_2|^{\alpha}}\,.\end{aligned}\ ] ] if and we will write instead of ,h^{\beta})} ] .similar notations will be employed for the hlder spaces ) ] etc . recalling the notation in we will define the ` generalized lie bracket ' : = \nabla e_2(\tilde{u } )e_1(u ) - \nabla e_1(u ) e_2(\tilde{u}),\end{aligned}\ ] ] for all suitably regular and . belowwe often consider which satisfies the shifted equation ( cf . ) note that , in contrast to , is in time for any .we next prove two auxiliary lemmata which encapsulates the process of obtaining ] , }\|[e(\bar{u}),f(u)]\|_{h^2}^{2p}\big)^{1/2 } + \big({\mathbb{e}}\|[e(\bar{u}),f(u)]\|_{c^{\alpha } h}^{2p}\big)^{1/2}\big]\ , , \label{eq : suit : reg2}\end{aligned}\ ] ] with . since solves and satisfies we have \rangle .\label{eq : gen : der}\end{aligned}\ ] ] now , immediately follows from hlder inequality and . to prove, we use that for any , , and any suitably regular , one has } } \left| \frac{\langle a(t ) , b(t ) \rangle - \langle a(s ) , b(s ) \rangle}{|s - t|^{\alpha } } \right| = \ ! \ ! \ ! \sup_{\substack{t \neqs\\ s , t \in [ t/2 , t ] } } \left| \frac{\langle a(t ) - a(s ) , b(t ) \rangle + \langle a(s ) , b(t ) - b(s ) \rangle}{|s - t|^{\alpha } } \right| \notag \\ & \leq \|a\|_{l^\infty h^{-s } } \|b \|_{c^{\alpha}h^{s } } + \|a\|_{c^{\alpha}h^{-s ' } } \|b\|_{l^\infty h^{s'}}. 
\label{eq : holder : space : time : algebra : props } \end{aligned}\ ] ] combining with and using hlder s inequality , }\|{\mathcal{k}}_{t , t}\phi\|^{2p}\big)\big)^{1/2 } \big({\mathbb{e}}\big(\|[e(\bar{u}),f(u)]\|_{c^{\alpha}h}^{2p}\big)\big)^{1/2 } \notag \\ & \qquad + c \big({\mathbb{e}}\big(\|{\mathcal{k}}_{t , t}\phi\|_{c^{\alpha } h^{-2}}^{2p}\big)\big)^{1/2 } \big({\mathbb{e}}\big(\sup_{t\in[t/2,t]}\|[e(\bar{u}),f(u)]\|_{h^2}^{2p}\big)\big)^{1/2 } \notag\end{aligned}\ ] ] and follows from and .[ lem : aux : aux ] fix , ] and indexed by .define , for each , } |{g_{\phi}}(t)| \leq \epsilon \textrm { and } \sup_{t \in [ t/2 , t ] } |{g_{\phi}}'(t)| >\epsilon^{\frac{\alpha}{2(1 + \alpha ) } } \right\}. \end{aligned}\ ] ] then , there is such that for each )}^{2/\alpha}\right)\ , .\label{eq : hm : wrapper : thm : conclusion}\end{aligned}\ ] ] as observed in ( * ? ? ?* lemma 6.14 ) we have the elementary bound which is valid for any ) ] is an arbitrary stochastic process .then , for all and , there exists a measurable set with such that on and for every } |f(t)| < \epsilon^{\beta } \quad \rightarrow \quad \begin{cases } \textrm { either } & \sup\limits_{|\alpha| \leq m } \sup\limits _ { t \in [ 0,t ] } |a_{\alpha}(t)| \leq \epsilon^{\beta 3^{-m}},\\ \textrm { or } & \sup\limits_{|\alpha| \leq m } \sup\limits_{s \not = t \in [ 0,t ] } \frac { |a_{\alpha}(t ) - a_{\alpha}(s)|}{|t -s|}\geq\epsilon^{-\beta 3^{-(m+1)}}. \end{cases}\end{aligned}\ ] ] we now start proving the implications depicted in figure [ fig : brak:2 ] .note that throughout what follows we fix a small constant which gives the range of values for which lemmas [ lem : zeroth : step][lem : de : to : de : creation : step ] hold .the first lemma explains how a lower bounds bound on the eigenvalues of initiates the iteration .[ lem : zeroth : step ] for every and every there exist a set and with such that on the set } | \langle { \mathcal{k}}_{t , t } \phi , \sigma_{k}^{l}\rangle | \leq \epsilon^{1/8 } \|\phi\|,\end{aligned}\ ] ] for each , , and every .we recall that is the set of directly forced modes as in and the elements are given by . for any with , define see .note that let , where is as in with . then by lemma[ lem : aux : aux ] with , , and one has \\\|\phi\| = 1 } } \left| \langle \sigma_{k}^{l } , { \mathcal{k}}_{t , t } \phi \rangle \langle \sigma_{k}^{l } , { \partial}_{t}{\mathcal{k}}_{t , t } \phi \rangle \right|^{2 } \right ) \leq c \exp ( \eta \|u_0\|^2 ) \epsilon\end{aligned}\ ] ] for any , where .finally , on we have , cf ., that } |\alpha_{k , l}| | \langle { \mathcal{k}}_{t , t } \phi , \sigma_{j}^{l}\rangle | \leq \epsilon^{1/2 } \|\phi\|,\end{aligned}\ ] ] for each , and any .since , the assertion of the lemma follows for .we next turn to implications of the form = y ] .let with as in .once again , with , we see that holds on . 
on the other hand , by , , , , and , we have } |\partial_t \langle { \mathcal{k}}_{t , t } \phi , y_j^m ( \bar{u } ) \rangle|^2 \right ) \leq c\epsilon \exp\left ( \frac{\eta}{2 } \|u_0\|^2\right ) { \mathbb{e}}\sup_{t\in [ t/2,t]}\|z_{j}^{m}(u)\|^{4 } \notag\\ & \leq c\epsilon |j|^{8}\exp\left ( \frac{\eta}{2 } \|u_0\|^2\right ) { \mathbb{e}}\left(1+\sup_{t\in[t/2,t]}\|u\|_{h^{2}}^8\right ) \leq c\epsilon |j|^{8}\exp\left ( \eta \|u_0\|^2\right ) \end{aligned}\ ] ] for any , where .for the third inequality above we have also used the estimate }\|z_{j}^{m}(u)\|_{h^s}\leq c|j|^{4+s } \left(1+\sup_{t\in[t/2,t]}\|u\|_{h^{s+2}}^2\right ) , \label{eq : z : newbound:1 } \end{aligned}\ ] ] which follows from by counting derivatives and applying the hlder and poincar inequalities .the constants in the exponents of and in the forthcoming lemmas [ lem : om : to : de : creation : step ] , [ lem : de : to : de : creation : step ] rapidly become ; however , there is nothing special about these numbers .we simply need to track that in the bounds and grow like and respectively for some .we next establish implications corresponding the chain of brackets ] ( see , ) .let , where is as in over with . then , on one has , in view of , } | \langle { \mathcal{k}}_{t , t } \phi , y_j^m(u)\rangle | \leq \epsilon \|\phi\| \quad \rightarrow \quad \sup_{t \in [ t/2,t ] } | \langle { \mathcal{k}}_{t , t } \phi , z_j^m(u)\rangle | \leq \epsilon^{1/10}\|\phi\| \,.\ ] ] by lemma [ lem : aux : aux ] with and , , we have }\|z_j^m(u)\|_{h^2}^{16 } \big)^{1/2 } + \big({\mathbb{e}}\|z^m_j(u)\|_{c^{1/4}h}^{16}\big)^{1/2}\big ] \notag \\ & \leq c\epsilon|j|^{48}\exp(\frac{\eta}{2 } \|u_0\|^2 ) \bigg[\big({\mathbb{e}}\big(1 + \sup_{t\in[t/2,t]}\|u\|_{h^{4}}^{32 } \big)\big)^{1/2 } \notag \\ & \quad\quad\quad\quad\quad\quad\quad\quad\quad \quad + \big({\mathbb{e}}\big(\|u\|_{c^{1/4 } h^{2}}^{16 } \big(1 + \sup_{t\in[t/2,t]}\|u\|_{h^{2}}^{16 } \big ) \big)\big)^{1/2}\bigg ] \notag \\ & \leq c\epsilon|j|^{48}\exp(\eta \|u_0\|^2 ) \ , , \label{eq : est : om:1}\end{aligned}\ ] ] where and we used the bilinearity of with estimates like those leading to .next , by expanding we find w^{k ,l}. \label{eq : ztozbar}\end{aligned}\ ] ] in view of , all of the second order terms in of the form , \sigma_{k'}^{l'}]w^{k , l}w^{k',l'} ] .to estimate each of the terms in , we introduce for , by theorem [ thm : all : about : my : wiener ] , there exists a set such that and on } | \langle { \mathcal{k}}_{t , t } \phi , z_j^m(u)\rangle| \leq \epsilon^{1/10 } \quad \implies \quad \left\ { \begin{array}{rl } \it{either } & { \mathcal{n}}_0(\phi ) \leq \epsilon^{1/30 } , \\\it{or } & { \mathcal{n}}_1(\phi ) \geq \epsilon^{-1/90}. \end{array } \right.\end{aligned}\ ] ] recalling that , let by on the set we obtain the desired conclusion for each .thus it remains to estimate the size of . by , , and the markov inequality we have however , by and along with further estimates along the lines leading to we have ;\mathbb{r})}^{90 } & \leq c \exp(\eta/2 \|u_0\|^2 ) \left({\mathbb{e}}\sup_{t \in[t/2 , t ] } \|[z_j^m(\bar{u } ) , f(u)]\|^{180 } \right)^{1/2 } \notag \\ & \leq c \exp(\eta/2 \|u_0\|^2 ) |j|^{90\times 6 } \left({\mathbb{e}}(1 + \|u\|_{h^{4}}^{3\times 180})\right)^{1/2 } \notag \\ & \leq c\exp(\eta \|u_0\|^2 ) |j|^{90\times 6}\ , , \label{eq : ana : est}\end{aligned}\ ] ] where . 
finally , due to and similar applications of andthe estimate \rangle \right\|_{c^1([t/2,t];\mathbb{r})}^{90 } \leq c \exp(\eta \|u_0\|^2 ) |j|^{90\times 2 } \label{eq : ana : est:3}\ ] ] follows . by combining - we obtain , andthe proof is complete .the final lemma of this section corresponds to brackets of the form ] ( see ) .let , where is defined in .thus on we have ( invoking ) for each , and } | \langle { \mathcal{k}}_{t , t } \phi , y_j^m(u)\rangle | \leq \epsilon \quad \rightarrow \quad \sup_{t \in [ t/2,t ] } | \langle { \mathcal{k}}_{t , t } \phi , [ z_j^m(\bar{u}),f(u)]\rangle | \leq \epsilon^{1/300}. \label{eq : zbaru : f}\end{aligned}\ ] ] similarly as in the proof of lemma [ lem : om : to : de : creation : step ] , )}^{8\times 30}\right ) \leq c \epsilon |j|^{240 \times 6 } \exp ( \eta \|u_0\|^2)\,,\end{aligned}\ ] ] where the last inequality is analogous to estimates in .next , we establish with replaced by , for which we use the expansion .specifically , as in }|\langle { \mathcal{k}}_{t , t } \phi , [ [ z_j^m(\bar{u } ) , \sigma_k^l],f(u)]\rangle| |w^{k , l}(t)|\leq c |j| \epsilon \|\phi\| \sup_{t \in [ t/2 , t ] } |w^{k , l}(t)| \,.\end{aligned}\ ] ] since , markov inequality yields , with } |w^{k , l}(t)| \leq \epsilon^{-1/2}\} ] with respect to and again we use theorem 7.1 of to establish } | \langle { \mathcal{k}}_{t , t } \phi , [ z_j^m(u),f(u)]\rangle | \leq \epsilon^{1/600 } \|\phi\| \notag \\ & \rightarrow \sup_{l\in\{0,1\ } , k\in{\mathcal{z}}}\sup_{t\in[t/2,t ] } on a set satisfying the proof is finished if we set , and note by a and that ,\sigma_{k}^{l}]= [ [ z_j^m(u),f(u)],\sigma_{k}^{l}].\end{aligned}\ ] ] with all elements in figure [ fig : brak:2 ] now established , we explain how the lemmata are pieced together to conclude the proof of proposition [ prop : small : conclusion ] . to simplify the forthcoming calculations we denote by the power of , and the power of , appearing in the statement of lemma [ lem : de : to : de : creation : step ] , that is , and . then assertions of lemmata [ lem : zeroth : step ] [ lem : de : to : de : creation : step ] are of the form : for each with and for any sufficiently small one has on a set with , where and , are appropriate functionals .denote see figure [ fig : span : arg:1 ] .note that the choice to ` delete ' the corners of the triangular set is to assure that points in can be reached from points in using only moves depicted in figure [ fig : new : sig : directions ] .[ lem : small : sum ] let be as above , and for every denote then there exists such that for each , there is a set and with such that on for any , , , one has } & | \langle { \mathcal{k}}_{t , t } \phi , \sigma_j^m\rangle| \leq \epsilon^{p_{n } \kappa } \|\phi\| \quad \textrm{and } \quad \sup_{t\in[t/2,t ] } | \langle { \mathcal{k}}_{t , t } \phi , y_j^m(u)\rangle| \leq \epsilon^{p_{n } \kappa } \|\phi\| \ , .\label{eq : ind}\end{aligned}\ ] ] lemma [ lem : small : sum ] establishes the smallness of for all .the smallness of requires more work and it is discussed below in lemma [ lem : junk : e ] .we first remark that the function decreases , and therefore for each .we proceed by induction in . for the first step , ,we show that the result holds on the set . to this endwe first establish for , which are the directly forced modes in .indeed , lemma [ lem : zeroth : step ] and lemma [ lem : de : to : om : creation : step ] imply that holds for each , , with replaced by , on the set with .we now establish for . 
by with , and lemma [ lem : om :to : de : creation : step ] , one has } | \langle { \mathcal{k}}_{t , t } \phi , [ z_{j'}^m(u),\sigma_{k}^{l } ] \rangle | \leq \epsilon^{\kappa^3 } \|\phi\|,\end{aligned}\ ] ] on a set with using proposition [ prop : generatesigma ] with , , and all combinations of , we obtain ( since ) that on the set , for each , , } | \langle { \mathcal{k}}_{t , t } \phi , \sigma_{j}^m\rangle| \leq \frac{1}{2g}\epsilon^{\kappa^3 } \|\phi\| \leq \epsilon^{\frac{1}{2}\kappa^3 } \|\phi\| \leq \epsilon^{\kappa^4}\|\phi\|.\end{aligned}\ ] ] notice we have used the inequality . the first part of follows with . the second part of with follows from the first part and lemma [ lem : de : to : om : creation : step ] on so that .this completes the proof of the base case .next we establish the inductive step .that is , assuming holds for each ( with ) on a set , we will show that holds true for on a set .we introduce the set which is the ` boundary of excluding the and axes ' as illustrated by the broken line segments in figure [ fig : span : arg:1 ] .denote then the inductive hypothesis and lemma [ lem : om : to : de : creation : step ] imply then on the set we have } | \langle { \mathcal{k}}_{t , t } \phi , [ z_{j'}^m(u),\sigma_{k}^{l } ] \rangle | \leq \epsilon^{p_{n-1}\kappa } \|\phi\| .\label{eq : name1}\end{aligned}\ ] ] to complete the inductive step it is enough to establish for any fixed and .we observe that for each , there exists such that .in other words , any point in can be reached from via ` allowable directions ' as shown in figure [ fig : new : sig : directions ] .since is parallel to one of the axes and is not , we have and , , where , are defined by . using and proposition [ prop : generatesigma ], we infer that on the set , for each fixed , ( for our choice of ) } | \langle { \mathcal{k}}_{t , t } \phi , \sigma_j^m\rangle| \leq \frac{2n^{2}}{g}\epsilon^{p_{n-1}\kappa } \|\phi\| \leq \epsilon^{p_{n-1}\kappa/2 } \|\phi\| \ , , \label{eq : prelim : ind : step : imp}\ ] ] where we used that , , and .( estimates for are analogous ) we observe that and the desired bound follows when . if , then . ] to complete the induction it remains to establish the second part of .define we then obtain ( analogously to ) that on , and lemma [ lem : de : to : om : creation : step ] yield that holds with the desired .[ lem : junk : e ] fix and let and be as in lemma [ lem : small : sum ] .there is then for every and there exists a set with where , such that on the set for all , with , and , one has _ 0,t , ^2 | , _ j^m| ^p_2n , [ eq : ind:3 ] + for the definition of see and . below , without further notice , we use that for appropriate , where is as in lemma [ lem : small : sum ] . observe that implies , because then by lemma [ lem : small : sum ] , for each such that , on the set , with implies and follows . to establish we first fix with and .then , by lemma [ lem : small : sum ] , , and , on the set , next , fix with and set .it is easy to check that , , belong to whenever , so that , by the second part of , is satisfied ( with replaced by ) on the set . then by lemma [lem : de : to : de : creation : step ] ( the smallness conditions on required by lemma [ lem : de : to : de : creation : step ] are satisfied if for appropriate ) , } | \langle { \mathcal{k}}_{t , t } \phi,[z_{j'}^m(u),y_{k}^{l}(u)]\rangle | \leq \epsilon^{p_{2n}\kappa } \|\phi\| , \label{eq : calc : part:3}\ ] ] on the set with . 
then by and ( with ) , on one has combining both cases , and , one has for any , on the set , that since by lemma [ lem : junk : est : basic ] the first component of vanishes , there exists such that consequently , by and lemma [ lem : junk : est : basic ] , on the set , if we set , then and the markov inequality imply where . combining and , on , it holds that provided .finally , notice that by lemma [ lem : junk : e ] on a set one has whenever .in this final section we show how the abstract results developed in ( and cf . ) can be applied in our setting to establish mixing and pathwise attraction properties for the unique invariant measure associated to and to thus complete the proof of the main result theorem [ thm : mainresult ] .we begin by introducing some notations . for any ] with such that .here we simply set . ] for every , ] . * a gradient estimate on the markov semigroup ,, holds for . * given any , , and , there exists such that for any , here , is the dirac measure concentrated at and is defined in .then there exist positive constants such that for every and every .moreover , there exists a unique invariant measure ( that is for every ) and which holds for every and every ( cf . ) .[ thm : pathwise : conv : abs ] let be a feller markov semigroup on a metric space with the continuity property : for all , .let be the associated transition functions ( cf . ) .suppose that for some the contraction property holds for every .assume moreover that , for every , ) given for the results appearing in . ] ^ 3 p_t(u_0 , du ) < \infty\ , , \label{eq : polish : style : moment : bounds } \end{aligned}\ ] ] where .then , there exists a unique invariant probability measure such that , for any and any moreover , the limit exists and where is the distribution function of a normal random variable with mean zero and variance .using theorem [ thm : hm : mixing : thm],[thm : pathwise : conv : abs ] we now establish the attraction properties ( i)(iii ) to complete the proof of theorem [ thm : mainresult ] .we begin by establishing the conditions for ( a)(c ) of theorem [ thm : hm : mixing : thm ] . to prove ( a ) , note that for any , with implies we have for any ]. then by the continuous dependence of solutions with respect to external forcing we conclude for sufficiently small .more precisely we can use the change of variable and standard estimates to show that .we now establish from as follows . for and by from it follows that , that is , couples and .as such , using and , we infer that , for any , , and any where and follows . having established the conditions ( a)(c ) in theorem [ thm : hm : mixing : thm ] we infer and for . to prove it suffices to show that , where is the unique invariant measure of . however , and therefore it suffices to show where . for any , define and note that . now using that and we have for any , now since is arbitrary we infer that for that where is independent of . passing andthen and using the monotone convergence theorem we conclude and follows . the remaining convergence properties , followonce we show that the conditions for theorem [ thm : pathwise : conv : abs ] are met .the feller property and stochastic continuity of follow immediately from the well - posedness properties of as recalled above in proposition [ prop : wellposed ] .it remains to verify the bound in . 
by and we have for already fixed and for any and any that ^ 3 p_t(u_0 , du ) & \leq \int \exp(3 { \varsigma}\|u\|^2 ) \|u\|^3 p_t(u_0 , du ) \leq c \int \exp(\eta^\ast \|u\|^2 ) p_t(u_0 , du ) \\ & = c { \mathbb{e}}(\exp ( \eta^\ast \|u(t , u_0)\|^2 ) ) \leqc \exp ( \eta^ * \|u_0\|^2)\,,\end{aligned}\ ] ] where the constant is independent of and .thus , in view of , we infer the bound and hence the convergences , .this completes the proof of theorem [ thm : mainresult ] .section [ sec : moment : est ] collects various moment estimates for . in section [ subsect : malliavin ] we briefly review of some aspects of the malliavin calculus relevant to our analysis above . in this sectionwe provide details for the moments bound used throughout the manuscript . as above the dependence on physical parameter in constantsis suppressed in what follows , see remark [ rmk : constants : conven ] .denote then , cf . , also recall , that our domain is , and therefore the poincar ` e inequality takes the form .most of the forthcoming bounds have previously been obtained in the context of the stochastic navier - stokes equations and some other nonlinear spdes with a dissipative ( parabolic ) structure . in order to modify them for the boussinesq system, we need to compensate for the ` buoyancy ' term when carrying out energy estimates .this is accomplished by differently weighting the temperature and momentum equations .we illustrate this strategy in the proof of ; proofs of other estimates use the same approach in combination with a straightforward modification of existing methods for the ( stochastic ) navier - stokes equation ( see e.g. ) and they are omitted . in the first lemma we state a priori bounds on .these estimates reflect parabolic type regularization properties of , and are particularly useful for obtaining spectral bounds on the malliavin matrix carried out in section [ sec : mal : spec : bnds][sec : braket : est : mal : mat ] .[ lem : exp : moments ] fix any and let be the unique solution of with .denote .there exists such that : * for any and ] , where are positive constants independent of and . * for any , , and ] has finite -th moment for any , see .[ rmk : stupid : rho ] in the following estimates appears only on the right hand side , and therefore they remain valid if is increased , thus we do not assume any upper bound on .next lemmata collect estimates on linearizations of . recall the definitions of the operators and its adjoint given in and respectively .moreover , for any let be the second derivative of with respect to an initial condition . observe that for fixed and any the function is the solution of [ lem : lin : moments ] for each and , we have the pathwise estimate where is independent of . moreover , for each , , and there is such that }\| { \mathcal{j}}_{s , t } \|^p & \leq c \exp \left ( \eta \|u_0\|^2\right ) , \label{eq : lin : growth : est } \\ { \mathbb{e}}\sup_{s < t \in [ \tau , t ] } \|{\mathcal{k}}_{s , t } \|^p & \leq c \exp ( \eta \|u_0\|^2 ) , \label{eq : back :lin : est:1}\\ { \mathbb{e}}\sup_{s < t \in [ \tau , t ] } \| { \mathcal{j}}^{(2)}_{s , t } \|^p & \leq c \exp \left ( \eta \|u_0\|^2 \right)\ , .\label{eq : second : esti}\end{aligned}\ ] ] proof of follows along the same lines as ( * ? ? ?* lemma 4.10.3 ) . by taking expectation , follows from , , and .finally , follows from by duality and is similar to ( * ? ? 
?* lemma 4.10.4 ) .the next lemma provides us with estimates to initial time in a weak norm , which allows us to avoid some technical arguments in section [ sec : implications : eigenvals ] ( cf . ) . for any , , and there is such that } \|{\partial}_{t } { \mathcal{k}}_{t , t } \xi \|^p_{h^{-2 } } \leq c \exp \left ( \eta \|u_0\|^2 \right ) \|\xi\|^{p } \ , .\label{eq : lin : smooth}\end{aligned}\ ] ] recall that solves and notice that for any .since and then follows from and . the next lemma is a version of the foias - prodi estimate , , used in this work .specifically the estimate is employed in the decay estimate .[ lem : parabolic : cont : high : modes : jop ] for every , , there exists , such that for any one has ( recall that was defined in ) the proof is analogous to ( * ? ? ?* lemma 4.17 ) and it essentially follows from the fact that has compact resolvent ( see ( * ? ? ? * proof of theorem 8.1 ) ) .we next present estimates on the operators and the inverse of the regularized malliavin matrix that are primarily used in section [ sec : proof : main : prop ] .[ lem : claim ] for define , , and by to , , and respectively. then ; { \mathbb{r}}^d ) , h ) } \leq c\left ( \int_s^t \| { \mathcal{j}}_{r , t } \|^2 dr\right)^{1/2 } \label{eq : gen : aaa : bnd}\end{aligned}\ ] ] for a constant independent of .moreover , ; { \mathbb{r}}^d ) ) } & \leq 1 \ , , \label{eq : op : norm:1 } \\\|({\mathcal{m}}_{s , t } + i \beta)^{-1/2 } { \mathcal{a}}_{s , t } \|_{\mathcal{l}(l^2([s , t ] ; { \mathbb{r}}^d ) , h ) } & \leq 1 \ , , \label{eq : op : norm:2}\\ \|({\mathcal{m}}_{s , t } +i \beta)^{-1/2 } \|_{\mathcal{l}(h , h ) } & \leq \beta^{-1/2 } \ , .\label{eq : op : norm:3}\end{aligned}\ ] ] here , denotes the operator norm of the linear map between the given hilbert spaces and .the first bound follows from the definition of and hlder s inequality .next , since is self - adjoint ; { \mathbb{r}}^d)}\end{aligned}\ ] ] for any . setting ( is invertible ) we immediately obtain and .finally , follows from by duality . forthe ` cost of control ' bounds on is section [ sec : proof : main : prop ] we also made use of bounds on the malliavin derivative of the random operators , , and for . observe that for ( see ) we refer the reader to appendix [ subsect : malliavin ] for further details on the malliavin derivative operator and the associated spaces on which it acts .[ lem : mal : est ] for any the random operators , , are differentiable in the malliavin sense .moreover , for any and we have the bounds , { \mathbb{r}}^d),h ) } & \leq c \exp ( \eta \|u_0\|^2 ) \ , , \label{eq : mal : a : est } \\ { \mathbb{e}}\|{\mathfrak{d}}_{\tau}^j { \mathcal{a}}_{s , t}^\ast\|^p_{\mathcal{l}(h , l^2([s , t ] , { \mathbb{r}}^d ) ) } & \leq c \exp ( \eta \|u_0\|^2)\ , , \label{eq : mal : a : star : est}\end{aligned}\ ] ] where .the proof of is based on the observation and the bound .further details can be found in . in this section ,we recall in our context and notations some elements of the malliavin calculus used in above . for further general background on this vast subject see , for example , .fix a stochastic basis , where is a -dimensional standard brownian motion , is a filtration to which this process is adapted . in application to , represents the number of independent noise processes driving the system .fix any .we first recall the definition of the _ malliavin derivative _ which is defined on a subset of for ( see or ) .we begin by explaining how this operator acts on ` smooth random variables ' . 
for any given , consider a schwartz function , that is , that satisfies for any multi - indices . for such functionsdefine , by where are deterministic elements in , { \mathbb{r}}^d) ] . to extend to a broader class of elementswe adopt the norm ; { \mathbb{r}}^d)}^p,\end{aligned}\ ] ] and denote be the closure of the above defined functions under this norm .we can repeat the above construction for random variables taking values in a separable hilbert space . in this case start by considering ` elementary ' functions of the form where is a finite index set , , are schwartz functions , elements in and , as above , , are deterministic elements in , { \mathbb{r}}^d) ] . with a slight abuse of notationwe denote , { \mathbb{r}}^d ) \otimes \mathcal{h}}^p.\end{aligned}\ ] ] as above , we take to be the closure of the functions of the form under the norm . for , we adopt the notations , \quad { \mathfrak{d}}^j f : = ( { \mathfrak{d}}f)^j , j = 1 , \ldots d,\end{aligned}\ ] ] i.e. is the component of as an element in ( or ) , for fixed ] , then with these basic definitions in place , we now introduce two important elements of malliavin s theory , the chain rule and the integration by part formula . the malliavin chain rule states that for any ( continuously differentiable functions with bounded derivatives ) and with one has that and see e.g. ( * ? ? ?* proposition 1.2.3 ) .note also that this chain rule extends to the hilbert space setting ; if and then and .we used a more general form of the product rule that can be found in e.g. . ] next , we introduce the malliavin integration by part formula which can be understood in terms of the adjoint operator to . for , { \mathbb{r}}^d) ] by , { \mathbb{r}}^d ) } = { \mathbb{e}}(f { \mathfrak{d}}^\ast v ) \ , , \label{eq : m : ibp : rule}\end{aligned}\ ] ] for any and any .if has the form we define , { \mathbb{r}}^d ) } : = \sum_{i \in \mathcal{i } } { \mathbb{e}}\langle { \mathfrak{d}}f_i , v \rangle_{l^2([0,t ] , { \mathbb{r}}^d ) } h_i = \sum_{i \in \mathcal{i } } { \mathbb{e}}(f_i { \mathfrak{d}}^\ast v ) h_i = { \mathbb{e}}\left ( \sum_{i \in \mathcal{i } } f_i h_i { \mathfrak{d}}^\ast v\right ) = { \mathbb{e}}(f { \mathfrak{d}}^\ast v ) \,,\end{aligned}\ ] ] and therefore , after passing to the limit , we see that holds true for valued elements and , { \mathbb{r}}^d)) ] .the map is called the_ skorokhod integral _ ( see ) andis often written as so that reads as , { \mathbb{r}}^d ) } = { \mathbb{e}}\left(f \int_0^t v \cdot dw\right ) .\label{eq : m : ibp : rule : trad : notation}\end{aligned}\ ] ] the reason behind this notation is that if , { \mathbb{r}}^d) ] valued random variable , we have that ; { \mathbb{r}}^d ) ) \subset \textrm{dom}({\mathfrak{d}}^\ast) ] , then ;{\mathbb{r}}^d )\otimes l^2([0,t];{\mathbb{r}}^d ) ) = l^2(\omega;l^2([0,t]^2 ; { \mathbb{r}}^{d \times d}))$ ] and the generalized it isometry takes the form : , { \mathbb{r}}^d ) } + { \mathbb{e}}\int_0^t \int_0^t \mbox{tr } ( { \mathfrak{d}}_s v(r ) { \mathfrak{d}}_r v(s ) ) ds dr\ , , \label{eq : gen : ito : ineq}\end{aligned}\ ] ] see e.g. ( * ? ? ?* chapter 1 , ( 1.54 ) ) . 
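for reference, the standard forms of these objects, as they appear in the monographs cited above (e.g. nualart), can be written out explicitly. the display below is a restatement of textbook identities in the notation used here, not a new result:

% malliavin derivative of a smooth random variable, chain rule, duality and
% generalized ito isometry, stated in the standard (nualart) form.
\begin{align*}
 & {\mathfrak{D}}^j_t F \;=\; \sum_{i=1}^{n} \partial_i f\big(W(h_1),\ldots,W(h_n)\big)\, h_i^j(t)
   \qquad \text{for } F = f\big(W(h_1),\ldots,W(h_n)\big),\\
 & {\mathfrak{D}}\,\varphi(F_1,\ldots,F_m) \;=\; \sum_{i=1}^{m} \partial_i\varphi(F_1,\ldots,F_m)\,{\mathfrak{D}}F_i\,,\\
 & {\mathbb{E}}\,\langle {\mathfrak{D}}F , v\rangle_{L^2([0,T];{\mathbb{R}}^d)}
     \;=\; {\mathbb{E}}\big(F\,{\mathfrak{D}}^\ast v\big)
     \;=\; {\mathbb{E}}\Big(F\int_0^T v\cdot dW\Big),\\
 & {\mathbb{E}}\big({\mathfrak{D}}^\ast v\big)^2
     \;=\; {\mathbb{E}}\,\|v\|^2_{L^2([0,T];{\mathbb{R}}^d)}
       \;+\; {\mathbb{E}}\int_0^T\!\!\int_0^T \operatorname{tr}\big({\mathfrak{D}}_s v(r)\,{\mathfrak{D}}_r v(s)\big)\,ds\,dr\,.
\end{align*}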
in view of, the classical it isometry is recovered from when is -adapted .more generally such observations concerning the measurability of in conjunction with , are used in a crucial fashion to obtain the bound .the authors gratefully acknowledge the support of the institute for mathematics and its applications ( i m a ) where this work was conceived .jf , ngh , gr were postdoctoral fellows and et was a new directions professor during the academic year 2012 - 2013 .we have also benefited from the hospitality of the department of mathematics virginia tech and from the newton institute for mathematical sciences , university of cambridge where the final stage of the writing was completed .ngh s work has been partially supported under the grant nsf - dms-1313272 .we would like to thank t. beale , c. doering , d. faranda , j. mattingly , v. ver ' ak , and v. vicol for numerous inspiring discussions along with many helpful references .we would also like to express our appreciation to l. capogna and h. bessaih for helping us to initiate the reading group at the i m a that eventually lead to this work .s. albeverio , f. flandoli , and y. g. sinai , _ spde in hydrodynamic : recent progress and prospects _ , lecture notes in mathematics , vol .1942 , springer - verlag , berlin , 2008 , lectures given at the c.i.m.e .summer school held in cetraro , august 29september 3 , 2005 , edited by giuseppe da prato and michael rckner .mr 2459087 ( 2009g:76120 ) j .- m .martingales , the malliavin calculus and hrmander s theorem _ , stochastic integrals ( proc .durham , durham , 1980 ) , lecture notes in math .851 , springer , berlin , 1981 , pp . 85109 .mr 620987 ( 82h:60114 ) j. bricmont , a. kupiainen , and r. lefevere , _ ergodicity of the 2d navier - stokes equations with random forcing _* 224 * ( 2001 ) , no . 1 , 6581 , dedicated to joel l. lebowitz .mr 1868991 ( 2003c:76032 ) e. bodenschatz , w. pesch , and g. ahlers , _ recent developments in rayleigh - bnard convection _ , annual review of fluid mechanics , vol .32 , annu .fluid mech . , vol .32 , annual reviews , palo alto , ca , 2000 , pp . 709778 .mr 1744317 ( 2000k:76059 ) j. r. cannon and e. dibenedetto , _ the initial value problem for the boussinesq equations with data in _ , approximation methods for navier - stokes problems ( proc .paderborn , paderborn , 1979 ) , lecture notes in math .771 , springer , berlin , 1980 , pp .mr 565993 ( 81f:35101 ) d. chae and o. y. imanuvilov , _ generic solvability of the axisymmetric -d euler equations and the -d boussinesq equations _ , j. differential equations * 156 * ( 1999 ) , no . 1 , 117 .mr 1700862 ( 2000d:76009 ) g. da prato , m. rockner , b. l rozovskii , and f. y. wang , _ strong solutions of stochastic generalized porous media equations : existence , uniqueness , and ergodicity _ , communications in partial differential equations * 31 * ( 2006 ) , no . 2 , 277291 . g. da prato and j. zabczyk , _stochastic equations in infinite dimensions _ , encyclopedia of mathematics and its applications , vol . 44 , cambridge university press , cambridge , 1992 .mr mr1207136 ( 95g:60073 ) w. e and j. c. mattingly , _ergodicity for the navier - stokes equation with degenerate random forcing : finite - dimensional approximation _ , comm .pure appl . math .* 54 * ( 2001 ) , no . 11 , 13861402 .mr 1846802 ( 2002g:76075 ) w. e , j. c. mattingly , and y. g .. sinai , _ gibbsian dynamics and ergodicity for the stochastically forced navier - stokes equation _ , comm . math .* 224 * ( 2001 ) , no . 
1 , 83106 , dedicated to joel l. lebowitz .mr 1868992 ( 2002m:76024 ) c. foias , m. s. jolly , o. p. manley , and r. rosa , _ statistical estimates for the navier - stokes equations and the kraichnan theory of 2-d fully developed turbulence _, j. statist .* 108 * ( 2002 ) , no . 3 - 4 , 591645 .mr 1914189 ( 2004k:76067 ) c. foias , o. manley , r. rosa , and r. temam , _ navier - stokes equations and turbulence _ , encyclopedia of mathematics and its applications , vol .83 , cambridge university press , cambridge , 2001 .mr 1855030 ( 2003a:76001 ) c. foia and g. prodi , _ sur le comportement global des solutions non - stationnaires des quations de navier - stokes en dimension _ , rend .sem . mat .padova * 39 * ( 1967 ) , 134 .mr 0223716 ( 36 # 6764 ) m. hairer , j. c. mattingly , and m. scheutzow , _ asymptotic coupling and a general form of harris theorem with applications to stochastic delay equations _, probab . theory related fields * 149 * ( 2011 ) , no . 1 - 2 , 223259 .mr 2773030 n. ikeda and s. watanabe , _ stochastic differential equations and diffusion processes _ , second ed ., north - holland mathematical library , vol .24 , north - holland publishing co. , amsterdam , 1989 .mr 1011252 ( 90m:60069 ) n. kryloff and n. bogoliouboff , _ la thorie gnrale de la mesure dans son application ltude des systmes dynamiques de la mcanique non linaire _ , ann . of math .( 2 ) * 38 * ( 1937 ) , no . 1 , 65113 .mr 1503326 r. z. khas minskii , _ ergodic properties of recurrent diffusion processes and stabilization of the solution to the cauchy problem for parabolic equations _ , theory of probability & amp ; its applications * 5 * ( 1960 ) , no . 2 , 179196 .s. kusuoka and d. stroock , _ applications of the malliavin calculus .i _ , stochastic analysis ( katata / kyoto , 1982 ) , north - holland math .library , vol .32 , north - holland , amsterdam , 1984 , pp .mr 780762 ( 86k:60100a ) o.m.f.r.s .lord rayleigh , _ on convection currents in a horizontal layer of fluid , when the higher temperature is on the under side _, philosophical magazine series 6 * 32 * ( 1916 ) , no . issue 192 , 529546. p. malliavin , _ stochastic calculus of variation and hypoelliptic operators _ , proceedings of the international symposium on stochastic differential equations ( res .inst . math .kyoto univ . ,kyoto , 1976 ) ( new york ) , wiley , 1978 , pp .mr 536013 ( 81f:60083 ) to3em , _ exponential convergence for the stochastically forced navier - stokes equations and other partially dissipative dynamics _ , comm .* 230 * ( 2002 ) , no . 3 , 421462 .mr 1937652 ( 2004a:76039 ) a. naso , s. thalabard , g. collette , p. h. chavanis , and b. dubrulle , _ statistical mechanics of beltrami flows in axisymmetric geometry : equilibria and bifurcations _ , journal of statistical mechanics : theory and experiment * 2010 * ( 2010 ) , no . 06 , p06019 .to3em , _ malliavin calculus and its applications _ , cbms regional conference series in mathematics , vol . 110 , published for the conference board of the mathematical sciences , washington , dc , 2009 . mr 2498953 ( 2010b:60164 ) m. romito , _ ergodicity of the finite dimensional approximation of the 3d navier - stokes equations forced by a degenerate noise _, j. statist .* 114 * ( 2004 ) , no . 1 - 2 , 155177 .mr 2032128 ( 2005a:76128 ) b. l. 
rozovski , _ stochastic evolution systems _ , mathematics and its applications ( soviet series ) , vol .35 , kluwer academic publishers group , dordrecht , 1990 , linear theory and applications to nonlinear filtering , translated from the russian by a. yarkho .mr 1135324 ( 92k:60136 ) d. w. stroock , _ the malliavin calculus and its applications _ , stochastic integrals ( proc . sympos .durham , durham , 1980 ) , lecture notes in math .851 , springer , berlin , 1981 , pp .mr 620997 ( 82k:60092 ) to3em , _ a note on long time behavior of solutions to the boussinesq system at large prandtl number _ , nonlinear partial differential equations and related analysis , contemp .371 , amer .soc . , providence , ri , 2005 , pp .mr 2143874 ( 2006a:76105 ) to3em , _ asymptotic behavior of the global attractors to the boussinesq system for rayleigh - bnard convection at large prandtl number _ ,pure appl .* 60 * ( 2007 ) , no . 9 , 12931318 .mr 2337505 ( 2009a:35196 ) 2 juraj fldes + institute for mathematics and its applications ( i m a ) + university of minnesota + web : ima.umn.edu/~foldes/ +email : foldes.umn.edu + nathan glatt - holtz + department of mathematics + virginia polytechnic institute and state university + web : www.math.vt.edu/people/negh/ + email : negh.edu + geordie richards + department of mathematics + university of rochester + web : www.math.rochester.edu/grichar5/ + email : grichar5.rochester.edu + enrique thomann + department of mathematics + oregon state university + web : www.math.oregonstate.edu/people/view/thomann + email : thomann.orst.edu
|
we establish the existence, uniqueness and attraction properties of an ergodic invariant measure for the boussinesq equations in the presence of a degenerate stochastic forcing acting only in the temperature equation and only at the largest spatial scales. the central challenge is to establish time asymptotic smoothing properties of the markovian dynamics corresponding to this system. towards this aim we encounter a lie bracket structure in the associated vector fields with a complicated dependence on solutions. this leads us to develop a novel hörmander-type condition for infinite-dimensional systems. demonstrating the sufficiency of this condition requires new techniques for the spectral analysis of the malliavin covariance matrix.
|
stochastic volatility models of the financial time series are fundamental to investment , option pricing and risk management .the volatility serves as a quantitative price diffusion measure in widely accepted stochastic multiplicative process known as geometric brownian motion ( gbm ) .extensive empirical data analysis of big price movements in the financial markets confirms the assumption that volatility itself is a stochastic variable or more generally the function of a stochastic variable . by analogy with physicswe can assume that speculative prices change in a `` random medium '' described by the random diffusion coefficient .such analogy may be reversible from the point that the complex models of stochastic price movements can be applicable for the description of complex physical systems such as stochastic resonance , noise induced phase transitions and high energy physics applications .this analogy contributes to further development of statistical mechanics - nonextensive one and superstatistics have been introduced .additive - multiplicative stochastic models of the financial mean - reverting processes provide rich spectrum of shapes for the probability distribution function ( pdf ) depending on the model parameters .such stochastic processes model the empirical pdf s of volatility , volume and price returns with success when the appropriate fitting parameters are selected .nevertheless , there is the necessity to select the most appropriate stochastic models able to describe volatility as well as other variables under the dynamical aspects and the long range correlation aspects .there is empirical evidence that trading activity , trading volume , and volatility are stochastic variables with the long - range correlation and this key aspect is not accounted for in widespread models .moreover , rather often there is evidence that the models proposed are characterized only by the short - range time memory .phenomenological descriptions of volatility , known as heteroscedasticity , have proven to be of extreme importance in the option price modeling .autoregressive conditional heteroscedasticity ( arch ) processes and more sophisticated structures garch are proposed as linear dependencies on previous values of squared returns and variances .these models based on empirically fitted parameters fail in reproducing power law behavior of the volatility autocorrelation function .we do believe that the stochastic models with the limited number of parameters and minimum stochastic variables are possible and would better reflect the market dynamics and its response to the external noise .recently we investigated analytically and numerically the properties of stochastic multiplicative point processes , derived formula for the power spectrum and related the model with the general form of multiplicative stochastic differential equation .preliminary the comparison of the model with the empirical data of spectrum and probability distribution of stock market trading activity stimulated us to work on the definition of more detailed model .the extensive empirical analysis of the financial market data , supporting the idea that the long - range volatility correlations arise from trading activity , provides valuable background for further development of the long - ranged memory stochastic models .we will present the stochastic model of trading activity with the long - range correlation and will investigate its connection to the stochastic modeling of volatility and returns .earlier we proposed the 
stochastic point process model , which reproduced a variety of self - affine time series exhibiting the power spectral density scaling as power of the frequency .the time interval between point events in this model fluctuates as a stochastic variable described by the multiplicative iteration equation here interevent time between subsequent events and fluctuates due to the random perturbation by a sequence of uncorrelated normally distributed random variable with the zero expectation and unit variance , denotes the standard deviation of the white noise and is a coefficient of the nonlinear damping .it has been shown analytically and numerically that the point process with stochastic interevent time may generate signals with the power - law distributions of the signal intensity and noise .the corresponding ito stochastic differential equation for the variable as a function of the actual time can be written as where is a standard random wiener process .. describes the continuous stochastic variable which can be assumed as slowly diffusing mean interevent time of poisson process with the stochastic rate .we put the modulated poisson process into the background of the long - range memory point process model .the diffusion of must be restricted at least from the side of high values .therefore we introduce a new term into the eq . , which produces the exponential diffusion reversion in equation \tau^{2\mu-2}\rmd t+\sigma\tau^{\mu-1/2}\rmd w , \label{eq : taustoch2}\ ] ] where and are the power and value of the diffusion reversion , respectively .the associated fokker - plank equation with the zero flow will give the simple stationary pdf \label{eq : taudistrib}\ ] ] with , where .we define the conditional probability of interevent time in the modulated poisson point process with stochastic rate as .\label{eq : taupoisson}\ ] ] then the long time distribution of interevent time has the integral form \tau^{\alpha}\exp\left[-\left(\frac{\tau}{\tau_{0}}\right)^m\right]\rmd \tau,\label{eq : taupdistrib}\ ] ] with defined from the normalization , . in the case of pure exponential diffusion reversion , , pdf has a simple form where denotes the modified bessel function of the second kind .for more complicated structures of distribution expressed in terms of hypergeometric functions arise .the introduced stochastic multiplicative model of interevent time , the interval between trades in the financial market , defines the model of event flow .first of all we apply ito transformation of variables introducing flow of events .the stochastic differential equation for follows from eq ., ^{2\eta-1}\rmd t+\sigma n^{\eta}\rmd w , \label{eq : nstoch}\ ] ] where and .. describes stochastic process with pdf and power spectrum noteworthy , that in the proposed model only two parameters , and ( or ) , define exponents and of two power - law statistics , i.e. of pdf and power spectrum .time scaling parameter in eq . can be omitted adjusting the time scale .stochastic variable denotes the number of events per unit time interval .one has to integrate the stochastic signal eq . in the time interval to get number of events in the selected time window . in this paperwe will denote the integrated number of points or events as and will call it trading activity in the case of the financial market. flow of points or events arises in different fields , such as physics , economics , cosmology , ecology , neurology , the internet , seismology , i.e. 
, electrons , photons , cars , pulses , events , and so on , or subsequent actions , like seismic events , neural action potentials , transactions in the financial markets , human heart beats , biological ion - channel openings , burst errors in many communication systems , the internet network packets , etc .we will discuss possible application of the proposed stochastic model to model the trading activity in the financial markets .it is widely accepted that in high - frequency financial data not only the returns but also the waiting times between the consecutive trades are random variables .waiting times between trades do not follow the exponential distribution and the related point process is not the poisson one .the extensive empirical analysis provides evidence that the related stochastic variable trading activity defined as flow of trades is stochastic variable with the long range memory .we will investigate how the proposed modulated poisson stochastic point process can be adjusted to model trading activity with the empirically defined statistical properties . detrended fluctuation analysis is one of the methods to define the second order statistics , the autocorrelation of trading activity .the histogram of the detrended fluctuation analysis exponents obtained by fits for each of the 1000 us stocks shows a relatively narrow spread of around the mean value .we use relation between the exponents of detrended fluctuation analysis and the exponents of power spectrum and in this way define the empirical value of the exponent for the power spectral density .our analysis of the lithuanian stock exchange data confirmed that the power spectrum of trading activity is the same for various liquid stocks even for the emerging markets .the histogram of exponents obtained by fits to the cumulative distributions of trading activites of 1000 us stocks gives the value of exponent describing the power - law behavior of the trading activity .empirical values of and confirm that the time series of the trading activity in real markets are fractal with the power law statistics .time series generated by stochastic process are fractal in the same sense .nevertheless , we face serious complications trying to adjust model parameters to the empirical data of the financial markets . for the pure multiplicative model ,when or , we have to take to get and to get , i.e. 
it is impossible to reproduce the empirical pdf and power spectrum with the same relaxation parameter and exponent of multiplicativity .we have proposed possible solution of this problem in our previous publications deriving pdf for the trading activity when this yields exactly the required value of and for .nevertheless , we can not accept this as the sufficiently accurate model of the trading activity as the empirical power law distribution is achieved only for very high values of the trading activity .probably this reveals the mechanism how the power law distribution converges to normal distribution through the growing values of the exponent , but empirically observed power law distribution in wide area of values can not be reproduced .let us notice here that the desirable power law distribution of the trading activity with the exponent may be generated by the model with and .moreover , only the smallest values of or high values of contribute to the power spectral density of trading activity .this suggests us to combine the point process with two values of : ( i ) for the main area of diffusing and and ( ii ) for the lowest values of or highest values of .therefore , we introduce a new stochastic differential equation for combining two powers of multiplicative noise , \frac{n^4}{(n\epsilon+1)^2}\rmd t+\frac{\sigma n^{5/2}}{(n\epsilon+1)}\rmd w , \label{eq : nstoch2}\ ] ] where a new parameter defines crossover between two areas of diffusion. the corresponding iterative equation of form for in such a case is \frac{\tau_{k}}{(\epsilon+\tau_{k})^2}+\sigma\frac{\tau_{k}}{\epsilon+\tau_{k}}\varepsilon_{k}. \label{eq : tauiterat2}\ ] ] eqs . and define related stochastic variables and , respectively , and they should reproduce the long range statistical properties of the trading activity and of waiting time in the financial markets . we verify this by the numerical calculations . in figure [ fig:1 ]we present the power spectral density calculated for the equivalent processes and ( see for details of calculations ) .this approach reveals the structure of the power spectral density in wide range of frequencies and shows that the model exhibits not one but rather two separate power laws with the exponents and . from many numerical calculations performed with the multiplicative point processes we can conclude that combination of two power laws of spectral density arise only when multiplicative noise is a crossover of two power laws , see and .we will show in the next section that this may serve as an explanation of two exponents of the power spectrum in the empirical data of volatility for ` s&p 500 ` companies .empirical data of the trading activity statistics must be modeled by the integrated flow of event defined in the time interval . in figure [ fig:2 ]we demonstrate the cumulative probability distribution functions calculated from the histogram of generated by eq . with increasing time interval .this illustrates how distribution of the integrated signal converges to the normal distribution ( the central limit theorem ) through growing exponent of the power - law distribution and provides an evidence that the empirically observed exponent of the power - law distribution of can be explained by the proposed model with the same parameters suitable for description of the power spectrum of the trading activity .the power spectrum of the trading activity can be calculated by the fast fourier transform of the generated numerical series . 
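as a concrete illustration of this numerical procedure, a minimal sketch in python is given below. it iterates a multiplicative interevent-time equation of the general shape of eq. [eq:tauiterat2], counts the events falling in consecutive time windows to obtain the integrated trading activity, and estimates the power spectral density of the resulting series with the fft. the reversion term, the parameter values (gamma, sigma, eps, tau_ref, m) and the window length are illustrative stand-ins rather than the published ones, and for brevity the sketch strings the tau_k directly into an event sequence instead of drawing poisson events at the modulated rate 1/tau.

import numpy as np

def simulate_interevent(n_events=200_000, gamma=0.02, sigma=0.05, eps=0.1,
                        tau_ref=1.0, m=1.0, seed=0):
    # illustrative multiplicative iteration for the interevent time tau_k;
    # the bracketed reversion term is a stand-in of the same general shape
    # as eq. [eq:tauiterat2], not the published coefficients.
    rng = np.random.default_rng(seed)
    tau = np.empty(n_events)
    t = tau_ref
    for k in range(n_events):
        drift = gamma * (1.0 - (t / tau_ref) ** m) * t / (eps + t) ** 2
        noise = sigma * t / (eps + t) * rng.standard_normal()
        t = abs(t + drift + noise)          # keep the interevent time positive
        tau[k] = t
    return tau

def trading_activity(tau, window=10.0):
    # count the events n_j falling in consecutive windows of length `window` of real time
    times = np.cumsum(tau)
    n_windows = int(times[-1] // window)
    counts, _ = np.histogram(times, bins=n_windows,
                             range=(0.0, n_windows * window))
    return counts

def periodogram(x):
    # plain fft periodogram of the mean-removed series, unit sampling step
    x = np.asarray(x, dtype=float) - np.mean(x)
    spec = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    freq = np.fft.rfftfreq(len(x), d=1.0)
    return freq[1:], spec[1:]               # drop the zero-frequency bin

if __name__ == "__main__":
    n = trading_activity(simulate_interevent())
    f, s = periodogram(n)
    low = (f > f[0]) & (f < 10.0 * f[0])    # crude fit over the lowest decade
    beta = -np.polyfit(np.log(f[low]), np.log(s[low]), 1)[0]
    print("estimated low-frequency spectral exponent beta ~ %.2f" % beta)

with the published drift and a poisson event generator substituted for the simplified loop, the same two steps, windowed counting followed by an fft periodogram, yield the spectra discussed next.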
as illustrated in figure [ fig:3 ] , the exponents of the power spectrum are independent of and reproduce the empirical results of the detrended fluctuation analysis .the same numerical results can be reproduced by continuous stochastic differential equation or iteration equation .one can consider the discrete iterative equation for the interevent time as a method to solve numerically continuous equation \frac{1}{(\epsilon+\tau)^2}\rmd t+\sigma\frac{\sqrt{\tau}}{\epsilon+\tau}\rmd w. \label{eq : taucontinuous}\ ] ] the continuous equation follows from the eq .after change of variables . we can conclude that the long range memory properties of the trading activity in the financial markets as well as the pdf can be modeled by the continuous stochastic differential equation . in this modelthe exponents of the power spectral density , , and of pdf , , are defined by one parameter .we consider the continuous equation of the mean interevent time as a model of slowly varying stochastic rate in the modulated poisson process .\label{eq : taupoisson}\ ] ] in figure [ fig:4 ] we demonstrate the probability distribution functions calculated from the histogram of generated by eq . with the diffusing mean interevent time calculated from eq . .numerical results show good qualitative agreement with the empirical data of interevent time probability distribution measured from few years series of u.s .stock data .this enables us to conclude that the proposed stochastic model captures the main statistical properties including pdf and the long range correlation of the trading activity in the financial markets .furthermore , in the next section we will show that this may serve as a background statistical model responsible for the statistics of return volatility in widely accepted gbm of the financial asset prices .we follow an approach developed in to analyze the empirical data of price fluctuations driven by the market activity .the basic quantities studied for the individual stocks are price and return return over a time interval can be expressed through the subsequent changes due to the trades in the time interval $ ] , we denote the variance of calculated over the time interval as . if are mutually independent one can apply the central limit theorem to sum .this implies that for the fixed variance return is a normally distributed random variable with the variance empirical test of conditional probability confirms its gaussian form , and the unconditional distribution is a power - law with the cumulative exponent .this implies that the power - law tails of returns are largely due to those of .here we refer to the theory of price diffusion as a mechanistic random process . for this idealized modelthe short term price diffusion depends on the limit order removal and this way is related to the market order flow .furthermore , the empirical analysis confirms that the volatility calculated for the fixed number of transactions has the long memory properties as well and it is correlated with real time volatility .we accumulate all these results into strong assumption that standard deviation may be proportional to the square root of the trading activity , i.e. , .this enables us to propose a very simple model of return and related model of volatility based on the proposed model of trading activity .we generate series of trade flow numerically solving eq . 
with variable steps of time and calculate the trading activity in subsequent time intervals as .this enables us to generate series of return , of volatility and of the averaged volatility . in figure [ fig:5 ]we demonstrate cumulative distribution of and the power spectral density of calculated from fft .we see that proposed model enables us to catch up the main features of the volatility : the power law distribution with exponent and power spectral density with two exponents and .this is in a good agreement with the empirical data .earlier proposed stochastic point process model as a possible model of trading activity in the financial markets has to be elaborated .first of all , we define that the long - range memory fluctuations of trading activity in financial markets may be considered as background stochastic process responsible for the fractal properties of other financial variables .waiting time in the sequence of trades more likely is double stochastic process , i.e. , poisson process with the stochastic rate defined as a stand - alone stochastic variable .we consider the stochastic rate as continuous one and model it by the stochastic differential equation , exhibiting long - range memory properties .we reconsider previous stochastic point process as continuous process and propose the related nonlinear stochastic differential equation with the same statistical properties .one more elaboration of the model is needed to build up the stochastic process with the statistical properties similar to the empirically defined properties of trading activity in the financial markets .we combine the market response function to the noise as consisting of two different powers : one responsible for the probability distribution function and another responsible for the power spectral density .the proposed new form of the continuous stochastic differential equation enables us to reproduce the main statistical properties of the trading activity and waiting time , observed in the financial markets .more precise model definition enables us to reproduce power spectral density with two different scaling exponents .this provides an evidence that the market behavior is dependant on the level of activity and two stages : calm and excited must be considered .we proposed a very simple model to reproduce the statistical properties of return and volatility .more sophisticated approach has to be elaborated to account for the leverage effect and other specific features of the market .we acknowledge support by the lithuanian state science and studies foundation .
|
earlier we proposed the stochastic point process model, which reproduces a variety of self-affine time series exhibiting power spectral density scaling as a power of the frequency, and derived a stochastic differential equation with the same long-range memory properties. here we present a stochastic differential equation as a dynamical model of the observed memory in the financial time series. the continuous stochastic process reproduces the statistical properties of the trading activity and serves as a background model for modeling the waiting time, return and volatility. the empirically observed statistical properties, namely the exponents of the power-law probability distributions and of the power spectral density of the long-range-memory financial variables, are reproduced with the same values of a few model parameters. _ keywords _ : stochastic processes, scaling in socio-economic systems, models of financial markets
|
we present an algorithmic improvement to the method of siddharth .this prior work presented the _ sentence tracker _ , a method for scoring how well a sentence describes a video clip or alternatively how well a video clip depicts a sentence .this method operates by applying an object detector to each frame of the video clip to detect and localize instances of the nouns in the sentence and stringing these detections together into tracks that satisfy the conditions of the sentence . to compensate for false negatives in the object - detection process , the detectors are biased to overgenerate ; the tracker must then select a single best detection for each noun in the sentence in each frame of the video clip that , when assembled into a collection of tracks , best depicts the sentence .this prior work presented both a cost function for doing this as well as an algorithm for finding the optimum of this cost function . while this algorithm is guaranteed to find the global optimum of this cost function, the space and time needed is exponential in the number of nouns in the sentence , the number of simultaneous objects to track . here ,we present an improved method for optimizing the same cost function .we prove that a relaxed form of the cost function has the same global optimum as the original cost function .we empirically demonstrate that local search on the relaxed cost function finds a local optimum that is qualitatively close to the global optimum .moreover , this local search method takes space that is only linear in the number of nouns in the sentence .each iteration takes time that is also only linear in the number of nouns in the sentence . in practice, the search process converges quickly .this result is important because the sentence tracker , as a scoring function , supports three novel applications : the ability to focus the attention of a tracker with a sentence that describes which actions and associated participants to track in a video clip that depicts multiple such , the ability to generate rich sentential descriptions of video clips with nouns , adjectives , verbs , prepositions , and adverbs , and the ability to search for video clips , in a large video database , that satisfy such rich sentential queries .since the method presented here optimizes the same cost function , it yields essentially identical scoring results , allowing it to apply in a plug - compatible fashion , unchanged , to all three of these applications , allowing them to scale to significantly larger problems .the sentence tracker is based on the _ event tracker _ which is , in turn , based on detection - based tracking .the general idea is to bias an object detector to overgenerate , producing detections , denoted , for each frame in the video clip .each detection has a score , higher scores indicating greater confidence .moreover , there is a measure of temporal coherence between detections in adjacent frames .if is a detection in frame and is a detection in frame , higher values of indicate that the position of relative to is consistent with observed motion in the video , between frames and , such as may be computed with optical flow .detection - based tracking seeks to find a detection index for each frame such that the track , composed of the selected detections , maximizes both the overall detection score and the overall temporal coherence .one way that this can be done is by adopting the following cost function : the advantage of this cost function is that a global optimum can be found in polynomial 
time with the viterbi algorithm .this is done with dynamic programming on a lattice whose columns are frames and whose rows are detections .the overall time complexity of this approach is and the overall space complexity is , where is the maximal number of detections per frame .the event tracker extends this approach by adding a term to the cost function that measures the degree to which the track depicts an event .events are modeled as hmms .[ eq : tracker ] is analogous to the map estimate for an hmm over a track : where denotes the transition probability from state to , denotes the state in frame , and denotes the probability of generating a detection in state . a global map estimate for the optimal state sequence also be found with the viterbi algorithm , on a lattice whose columns are frames and whose rows are states , in time and space , where is the number of states .the event tracker operates by jointly optimizing the objectives of eqs .[ eq : tracker ] and [ eq : map ] : {l } \left(\displaystyle\sum_{t=1}^tf(b^t_{j^t})\right)+ \left(\displaystyle\sum_{t=2}^tg(b^{t-1}_{j^{t-1}},b^t_{j^t})\right)\\ { } + \left(\displaystyle\sum_{t=1}^th(k^t , b^t_{j^t})\right)+ \left(\displaystyle\sum_{t=2}^ta(k^{t-1},k^t)\right ) \end{array } \label{eq : event}\ ] ] the global optimum of this cost function can also be found with the viterbi algorithm using a lattice whose columns are frames and whose rows are pairs of detections and states in time and space .this finds a track that not only has high scoring detections and is temporally coherent but also exhibits the spatiotemporal characteristics of the event as modeled by the hmm .the sentence tracker forms a factorial event tracker with factors for multiple tracks to represent multiple event participants and multiple hmms to represent the meanings of multiple words in a sentence to mutually constrain the overall spatiotemporal characteristics of a collection of event participants to satisfy the semantics of a sentence .different pairs of participants are constrained by different words in the sentence .for example , the sentence _ the person to the left of the chair carried the backpack away from the traffic cone towards the stool _ mutually constrains the spatiotemporal characteristics of five participants : a * person * , a * chair * , a * backpack * , a * traffic cone * , and a * stool * with the pairwise constituent spatiotemporal relations _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [ cols= " < " , ] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ the sentence tracker operates by optimizing the following cost function : {l } \displaystyle \left[\sum_{l=1}^l \left(\sum_{t=1}^tf(b^t_{j^t_l})\right)+ \left(\sum_{t=2}^tg(b^{t-1}_{j^{t-1}_l},b^t_{j^t_l})\right)\right]\\ \displaystyle { } + \left[\begin{array}{l}\displaystyle\sum_{w=1}^w \begin{array}[t]{@{}l@ { } } \left(\displaystyle\sum_{t=1}^t h_w ( k^t_w , b^t_{j^t_{\theta^1_w}},\ldots , b^t_{j^t_{\theta^{i_w}_w}})\right)\\ { } + \left(\displaystyle\sum_{t=2}^ta_w(k^{t-1}_w , k^t_w)\right ) \end{array}\end{array}\right ] \end{array } \label{eq : sentence}\ ] ] where there are event participants constrained by content words , denotes the track collection , and denotes the state - sequence collection . 
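before turning to the relaxation, it may help to make the underlying lattice concrete. the sketch below (python) implements the dynamic program that globally optimizes the single-track cost of eq. [eq:tracker]; the detection scores f and the temporal-coherence measure g are passed in as placeholders, since their exact definitions are not repeated here. the event tracker of eq. [eq:event] runs the same recursion over pairs of detections and hmm states.

import numpy as np

def viterbi_track(scores, coherence):
    # maximize sum_t f(b^t_{j^t}) + sum_{t>=2} g(b^{t-1}_{j^{t-1}}, b^t_{j^t})
    # over the per-frame detection indices j^t (eq. [eq:tracker]).
    #   scores:    list of length T; scores[t] is the array of detection scores f
    #   coherence: coherence(t, j_prev, j) -> temporal-coherence score g between
    #              detection j_prev in frame t-1 and detection j in frame t
    T = len(scores)
    delta = np.asarray(scores[0], dtype=float)   # best partial score per detection
    back = []                                    # back-pointers for frames 2..T
    for t in range(1, T):
        f_t = np.asarray(scores[t], dtype=float)
        g = np.array([[coherence(t, jp, j) for j in range(len(f_t))]
                      for jp in range(len(delta))])
        cand = delta[:, None] + g                # (previous detections) x (current detections)
        back.append(np.argmax(cand, axis=0))
        delta = np.max(cand, axis=0) + f_t
    j = int(np.argmax(delta))
    path = [j]
    for ptr in reversed(back):                   # follow the back-pointers to frame 1
        j = int(ptr[j])
        path.append(j)
    path.reverse()
    return float(delta.max()), path

as written, each frame of this recursion costs time quadratic, and memory linear, in the number of detections per frame. eq. [eq:sentence] stated above couples such tracks and word hmms on a joint lattice, which is what makes its exact optimization exponential in the number of nouns in the sentence.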
in the above , the hmm output model is generalized to take more than one detection as input .this is to model spatiotemporal relations of arity between multiple participants ( typically two ) .the _ linking function _ specifies which participants apply in which order and is derived by syntactic and semantic analysis of the sentence . while a global optimum to this cost functioncan still be found with the viterbi algorithm , using a lattice whose columns are frames and whose rows are tuples containing detections and states , the time complexity of such is and the space complexity is .thus both the space and time complexity is exponential in and , values which increase linearly with the number of nouns in the sentence .{l } \left[\begin{array}{l}\displaystyle\sum_{l=1}^l \begin{array}[t]{l } \left(\displaystyle\sum_{t=1}^t \displaystyle\sum_{j=1}^{j^t } p^{t , l}_j f(b^t_j)\right)\\ { } + \left(\displaystyle\sum_{t=2}^t \displaystyle\sum_{j'=1}^{j^{t-1 } } \displaystyle\sum_{j=1}^{j^t } p^{t-1,l}_{j ' } p^{t , l}_j g(b^{t-1}_{j'},b^t_j)\right)\\ \end{array}\end{array}\right]\\ { } + \left[\begin{array}{l}\displaystyle\sum_{w=1}^w \begin{array}[t]{l } \left(\displaystyle\sum_{t=1}^t \displaystyle\sum_{j_1=1}^{j^t}\cdots \displaystyle\sum_{j_{i_w}=1}^{j^t } \displaystyle\sum_{k=1}^k q^{t , w}_k p^{t,{\theta^1_w}}\cdotsp^{t,{\theta^{i_w}_w}}_j h_w(k , b^t_{j_1},\ldots , b^t_{j_{i_w}})\right)\\ { } + \left(\displaystyle\sum_{t=2}^t \displaystyle\sum_{k'=1}^{k_w } \displaystyle\sum_{k=1}^{k_w } q^{t-1,w}_{k ' } q^{t , w}_k a_w(k',k)\right ) \end{array}\end{array}\right ] \end{array } \label{eq : sentence - relax}\ ] ] we reformulate eqs . [ eq : tracker ] , [ eq : map ] , and [ eq : event ] using indicator variables instead of indices . instead of using to indicate the index of a detection in frame , we use as an indicator variable , which is zero for all indices , except the index of the selected detection , for which it is one .this allows eq .[ eq : tracker ] to be reformulated as : {l } \left(\displaystyle\sum_{t=1}^t \displaystyle\sum_{j=1}^{j^t } p^t_j f(b^t_j)\right)\\ { } + \left(\displaystyle\sum_{t=2}^t \displaystyle\sum_{j'=1}^{j^{t-1 } } \displaystyle\sum_{j=1}^{j^t } p^{t-1}_{j ' } p^t_j g(b^{t-1}_{j'},b^t_j)\right ) \end{array } \label{eq : tracker - relax}\ ] ] similarly , instead of using to indicate the state in frame , we use as an indicator variable , which is zero for all states , except that which is selected in frame , for which it is one .this allows eq .[ eq : map ] to be reformulated as : {l } \left(\displaystyle\sum_{t=1}^t \displaystyle\sum_{k=1}^k q^t_k h(k , b^t_{\hat{\jmath}^t}\right)\\ { } + \left(\displaystyle\sum_{t=2}^t \displaystyle\sum_{k'=1}^k \displaystyle\sum_{k=1}^k q^{t-1}_{k ' } q^t_ka(k',k)\right ) \end{array } \label{eq : map - relax}\ ] ] combining the two allows eq .[ eq : event ] to be reformulated as : {l } \left(\displaystyle\sum_{t=1}^t \displaystyle\sum_{j=1}^{j^t } p^t_j f(b^t_j)\right)\\ { } + \left(\displaystyle\sum_{t=2}^t \displaystyle\sum_{j'=1}^{j^{t-1 } } \displaystyle\sum_{j=1}^{j^t } p^{t-1}_{j ' } p^t_j g(b^{t-1}_{j'},b^t_j)\right)\\ { } + \left(\displaystyle\sum_{t=1}^t \displaystyle\sum_{j=1}^{j^t } \displaystyle\sum_{k=1}^k q^t_k p^t_j h(k , b^t_j)\right)\\ { } + \left(\displaystyle\sum_{t=2}^t \displaystyle\sum_{k'=1}^k \displaystyle\sum_{k=1}^k q^{t-1}_{k ' } q^t_k a(k',k)\right ) \end{array } \label{eq : event - relax}\ ] ] in the above , and denote the collections of the indicator variables and respectively .one can view the 
indicator variables and to be constrained to be in and to satisfy the sum - to - one constraints and . under these constraints , it is obvious that the formulations underlying eqs .[ eq : tracker ] and [ eq : tracker - relax ] , as well as eqs .[ eq : map ] and [ eq : map - relax ] , are identical .one can further apply a similar transformation to eq .[ eq : sentence ] to get eq .[ eq : sentence - relax ] , where the indicator variables are further indexed by the track and the indicator variables are further indexed by the word . and denote the collections of all indicator variables and respectively .similarly , it is again clear that the formulations underlying eqs .[ eq : sentence ] and [ eq : sentence - relax ] are identical .note that because is generalized to take more than one detection as input , the objective in eq .[ eq : sentence - relax ] becomes a multivariate polynomial of degree , where is the maximum of all of the arities of all of the words , instead of a quadratic form as in eqs .[ eq : tracker - relax ] , [ eq : map - relax ] , and [ eq : event - relax ] . eqs .[ eq : sentence - relax ] , [ eq : tracker - relax ] , [ eq : map - relax ] , and [ eq : event - relax ] , with unknowns taking binary values , are binary integer - programming problems ( linearly constrained in our case ) and are difficult to optimize with a large number of unknowns .thus we solve relaxed variants of these problems , taking the domain of the indicator variables to be ] .we prove this below .this allows us to introduce a local - search technique to solve the relaxed problems .eq . [ eq : sentence - relax ] has the following general form : {l } \displaystyle\max_{\mathbf{x } } \sum_{n=1}^n\hspace*{-3pt}\left(\hspace*{-3pt } \mathop{\sum_{v_{u_1}\in v_{u_1}}\hspace*{-7pt}\cdots\hspace*{-7pt}\sum_{v_{u_n}\in v_{u_n } } } \limits_{\{u_1,\ldots , u_n\}\in\mathcal{e}_n } \hspace*{-8pt}\phi(v_{u_1},\ldots , v_{u_n } ) x^{u_1}_{v_{u_1}}\cdots x^{u_n}_{v_{u_n } } \hspace*{-5pt}\right)\\ \begin{aligned } { \textrm{s.t.,}}\ & ( \forall u)(\forall v_u)x_{v_u}^u\ge0\\ & ( \forall u)\sum_{v_u\in v_u } x^u_{v_u}=1\\ \end{aligned}\\ \end{array } } \label{eq : score - general}\ ] ] where are ( indicator ) variables that form a polynomial objective , are the coefficients of the terms in this polynomial , and we sum over all terms of the varying degree .this polynomial can be viewed as denoting the overall compatibility of a labeling of an undirected hypergraph with vertices , labeled from a set of labels , where each term denotes the compatibility of a possible labeling configuration for the vertices of some hyperedge . denotes the set of all hyperedges of size .the inner nested summation over the indices sums over all possible labeling configurations for a particular hyperedge . summing this overall elements of sums over all hyperedges of size .the overall polynomial sums over all hyperedges of size . 
in the limiting case of maximal arity two, the hypergraph becomes an ordinary undirected graph and the hyperedges become ordinary undirected edges .the collection of variables can be viewed as a discrete distribution over possible labels for vertex .the coefficient associated with each hyperedge measures the compatibility of the labels for the constituent vertices .this hypergraph can be viewed as a high - order markov random field ( mrf ) .the common case of an mrf is when .ravikumar and lafferty showed that , for this case , the global optima of eq .[ eq : score - general ] are the same whether the domains of the variables are discrete or continuous , or ] ._ let the optimal value before relaxation be and the optimal value after relaxation be .it is obvious that since ] .moreover , we have presented a gt method for performing local search for local optima of eqs .[ eq : tracker - relax ] , [ eq : map - relax ] , [ eq : event - relax ] , and [ eq : sentence - relax ] .we now present empirical evidence that the local optima produced by gt are qualitatively identical to the global optima . when performing gt , we employed the following procedure .we randomly initialized the local search at 150 different label distributions .for each one , we ran 300 iterations of eq .[ eq : update ] .we then selected the resulting label distribution that corresponded to the highest objective and ran 5000 additional iterations on this label distribution to yield the resulting label distribution and objective . performing 150 restartsmay only be necessary for problems with a large number of variables . nonetheless , for simplicity , we employed this uniform number of restarts for all experiments .we evaluated the gt method on the same dataset that was used by yu and siskind .this dataset contains 94 video clips , each with between two and five sentential annotations of activities that occur in the corresponding clip .all but one of these clips contain at least one annotated sentence with exactly three content words : a subject , a verb , and a direct object .we did not process the single clip that does not .for the remaining 93 video clips , we selected the first annotated sentence with exactly three content words .for each such video - sentence pair , we computed the global optimum to eq .[ eq : sentence ] with the viterbi algorithm using the same features and hand - crafted word hmms ( for , for extra states were added ) as used by yu and siskind . for each video - sentence pair, we also computed a local optimum to eq .[ eq : sentence - relax ] with gt using these same features and hmms .we then computed relative error ( percentage ) between the global optimum computed by the viterbi algorithm and local optimum computed by gt for each video - sentence pair , and averaged over all 93 pairs .we repeated this three times : the average relative errors for , , and , are 2.22% , 2.79% , and 5.08% , respectively .[ fig : histogram ] gives histograms of the number of videos with given ranges of relative error .[ fig : histogram ] we limited the above experiment to sentences with no more than three content words because we wished to compare against the global optimum which can only be computed with the viterbi algorithm .this process may become intractable with sentences with more than three content words , especially with large . to demonstrate that the local - search approach can scale to process longer sentences we performed a second experiment . 
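the gt local search used in these experiments can be sketched compactly. the update below is the classical baum-eagon growth transform for a polynomial with nonnegative coefficients over a product of probability simplices; we assume it matches the spirit of eq. [eq:update], which is not reproduced here, and in particular the handling of negative coefficients (for example by shifting the polynomial) is omitted. the restart and iteration counts mirror the procedure described above (150 random initializations, 300 warm-up iterations, 5000 final iterations on the best candidate).

import numpy as np

def gt_step(x, terms):
    # one growth-transform (baum-eagon) step: each label weight is multiplied by
    # the partial derivative of the polynomial and renormalized on its simplex.
    # assumes nonnegative coefficients and a multilinear polynomial (each vertex
    # appears at most once per term), as in the hypergraph objective above.
    grad = {u: np.zeros_like(p) for u, p in x.items()}
    for coeff, factors in terms:          # factors: list of (vertex, label) pairs
        prod = coeff * np.prod([x[u][v] for u, v in factors])
        for u, v in factors:
            if x[u][v] > 0.0:
                grad[u][v] += prod / x[u][v]
    new_x = {}
    for u, p in x.items():
        num = p * grad[u]
        s = num.sum()
        new_x[u] = num / s if s > 0.0 else p
    return new_x

def objective(x, terms):
    return sum(c * np.prod([x[u][v] for u, v in f]) for c, f in terms)

def gt_search(terms, sizes, restarts=150, warm_iters=300, final_iters=5000, seed=0):
    # random-restart local search mirroring the procedure described in the text;
    # sizes maps each vertex to the number of candidate labels it can take
    rng = np.random.default_rng(seed)
    best_val, best_x = -np.inf, None
    for _ in range(restarts):
        x = {u: rng.dirichlet(np.ones(n)) for u, n in sizes.items()}
        for _ in range(warm_iters):
            x = gt_step(x, terms)
        val = objective(x, terms)
        if val > best_val:
            best_val, best_x = val, x
    for _ in range(final_iters):
        best_x = gt_step(best_x, terms)
    return objective(best_x, terms), best_x

each step visits every term of the polynomial once, so its cost grows linearly with the number of hyperedges, in line with the linear per-iteration cost noted in the introduction. the same settings were used for both experiments, including the second, larger-scale one described next.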
in this experiment , we processed six video clips with longer sentences . [ fig : videos ] illustrates the tracks produced for these video - sentence pairs , as derived from the indicator variables . note that for the example in the fourth row in fig . [ fig : videos ] , computing the global optimum with the viterbi algorithm would take lattice comparisons . we estimate that this would take about 20 years on a current computer . [ fig : videos ] lin present an approach for searching a database of video clips for those that match a sentential query . implicit in this is a function that scores how well a video clip depicts a sentence or alternatively how well a sentence describes a video clip . that work differs from the current work in several ways . first , they run a tracker in isolation , independently for each object , before scoring a video - sentence pair . here , we jointly perform tracking and scoring , and do so jointly for all tracks described by a sentence . this allows scoring to influence tracking and tracking one object to influence tracking other objects . second , their scoring function composes only unary primitive predicates , each applied to a single track . here , our scoring function composes multivariate primitive predicates , each applied to multiple tracks . this is what links the word meanings together and allows the entire sentential semantics to influence the joint tracking of all participants . third , the essence of their recognition process ( not training ) is that they automatically find what they denote as . this maps tracks to ( unary ) arguments of primitive predicates . here , we solve a dual problem of automatically finding what we denote as , a map from event participants to detections . fourth , they allow predicates to be assigned to a dummy track `` no - obj '' which allows them to ignore portions of the sentence . here , we constrain the track collection to satisfy all primitive predicates contained in a sentence . fifth , they adopt a `` no coreference '' constraint that requires that each nondummy track be assigned to a different primitive predicate . here , we allow such coreference . this allows us to process sentences like _ the person approached the backpack to the left of the chair _ that contain , in part , primitive predicates like where the track is assigned to two different primitive predicates . sixth , they do not model temporally - varying features in their primitive predicates .
here, we do so with hmms .the relaxed sentence tracker also shares some similarity in spirit with current research in image / video object co - detection / discovery based on object proposals .the goal of this line of work is to find instances of a common object across a set of different images or video frames , given a pool of object candidates generated by measuring the ` objectness ' of image windows in each image .this is done by associating a unary cost with each candidate to represent the confidence that that candidate is an object , and a binary cost between pairs of candidates to measure their similarity in appearance , resulting in a second - order mrf / crf .selecting the best candidate for each image constitutes a map estimate on this random field , and is usually relaxed to constrained nonlinear programming , or computed by a combinatorial optimization technique such as tree - reweighted message passing .except for its loopy graph structure , this formulation is analogous to detection - based tracking in eq .[ eq : tracker - relax ] .in contrast to the work presented here , prior work on object co - detection / discovery only exploits visual appearance analogous to the term in eq .[ eq : tracker - relax]and object - pair similarity analogous to the term in eq .[ eq : tracker - relax]but not the degree to which an object track exhibits particular spatiotemporal behavior , as is done by the terms and in the event tracker , eq .[ eq : event - relax ] , let alone the joint detection / discovery of multiple objects that exhibit the collective spatiotemporal behavior described by a sentence , as is done in the sentence tracker , eq .[ eq : sentence - relax ] .siddharth , barbu , and yu and siskind present a variety of applications for the sentence tracker . since the sentence tracker is simply a scoring function that scores a video - sentence pair , one can use it for the following applications : given a single video clip as input , that contains two different activities taking place with different subsets of participants , along with two different sentences that each describe these different activities , produce two different track collections that delineate the participants in the two different activities . given a video clip as input ,produce a sentence as output that best describes the video clip by searching the space of all possible sentences to find the one with the highest score on the input video clip . 
given a sentential query as input ,search a dataset of video clips to find that clip that best depicts the query by searching all clips to find the one with the highest score on the input query .given a training corpus of video clips paired with sentences that described these clips , search the parameter space of the models for the words in the lexicon that yields the highest aggregate score on the training corpus .with the exception of the last use case , language acquisition , the other use cases all treat the sentence tracker as a black box .we have demonstrated a plug - compatible black box that takes the same input in the form of a video - sentence pair and produces the same output in the form of a score and a track collection .since we have proven that the global optimum to the relaxed objective is the same as that for the original objective , and further empirically demonstrated that local search tends to find a local optimum that is qualitatively the same as the global one , one can employ this relaxed method to perform exactly the same first three uses cases as the original sentence tracker with identical , or nearly identical results .the chief advantage of the relaxed method is that it scales to longer sentences .more precisely , we demonstrate three distinct kinds of scaling : because the original sentence tracker had space complexity of and time complexity it was , in practice , limited to small , typically at most 20 . here , we perform experiments that demonstrate scaling to . similarly , the original sentence tracker was limited , in practice , to small , typically at most 3 . here , we perform experiments that demonstrate scaling to .is , in fact , determined by the number of frames , which is 15 , on average , in our experiments ] similarly , the original sentence tracker was limited , in practice , to small and , typically , at most three nouns in the sentence , and typically at most two words that have more than one state in their hmm .( words that describe static properties , like nouns , adjectives , and spatial - relation prepositions , typically have a single - state hmm while words that describe dynamic properties , like verbs , adverbs , and motion prepositions , typically have multiple states . ) here , we perform experiments that demonstrate scaling to and .since the method of yu and siskind for the fourth use case , namely language acquisition , does not use the sentence tracker as a black box , extending our new method to this use case is beyond the scope of this current work .the sentence tracker has previously been demonstrated to be a powerful framework for a variety of applications , both theoretical and potentially practical . it can serve as a model of grounded child language acquisition as well as the basis for searching full - length hollywood movies for clips that match rich sentential queries in a way that is sensitive to subtle semantic distinctions in the queries .however , until now , it was impractical to apply in many situations that require scaling to complex sentences with many event participants .our results remove that barrier .this research was sponsored , in part , by the army research laboratory and was accomplished under cooperative agreement number w911nf-10 - 2 - 0060 . 
the views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies , either expressed or implied , of the army research laboratory or the u.s . government . the u.s . government is authorized to reproduce and distribute reprints for government purposes , notwithstanding any copyright notation herein .
|
prior work presented the sentence tracker , a method for scoring how well a sentence describes a video clip or alternatively how well a video clip depicts a sentence . we present an improved method for optimizing the same cost function employed by this prior work , reducing the space complexity from exponential in the sentence length to polynomial , as well as producing a qualitatively identical result in time polynomial in the sentence length instead of exponential . since this new method is plug - compatible with the prior method , it can be used for the same applications : video retrieval with sentential queries , generating sentential descriptions of video clips , and focusing the attention of a tracker with a sentence , while allowing these applications to scale with significantly larger numbers of object detections , word meanings modeled with hmms with significantly larger numbers of states , and significantly longer sentences , with no appreciable degradation in quality of results .
|
this paper is a contribution to the methodology of bayesian variable selection for linear gaussian regression models , an important problem which has been much discussed both from a theoretical and a practical perspective ( see chipman _ et al ._ , 2001 and clyde and george , 2004 for literature reviews ) .recent advances have been made in two directions , unravelling the theoretical properties of different choices of prior structure for the regression coefficients ( fernndez _ et al ._ , 2001 ; liang _ et al ._ , 2008 ) and proposing algorithms that can explore the huge model space consisting of all the possible subsets when there are a large number of covariates , using either mcmc or other search algorithms ( kohn _ et al . _ , 2001 ; dellaportas _ et al ._ , 2002 ; hans _ et al ._ , 2007 ) . in this paper, we propose a new sampling algorithm for implementing the variable selection model , based on tailoring ideas from evolutionary monte carlo ( liang and wong , 2000 ; jasra _ et al . _ , 2007 ; wilson _ et al ._ , 2009 ) in order to overcome the known difficulties that mcmc samplers face in a high dimension multimodal model space : enumerating the model space becomes rapidly unfeasible even for a moderate number of covariates . for a bayesian approach to be operational, it needs to be accompanied by an algorithm that samples the indicators of the selected subsets of covariates , together with any other parameters that have not been integrated out .our new algorithm for searching through the model space has many generic features that are of interest _ per se _ and can be easily coupled with any prior formulation for the variance - covariance of the regression coefficients .we illustrate this by implementing -priors for the regression coefficients as well as independent priors : in both cases the formulation we adopt is general and allows the specification of a further level of hierarchy on the priors for the regression coefficients , if so desired .the paper is structured as follows . in section [ background ] , we present the background of bayesian variable selection , reviewing briefly alternative prior specifications for the regression coefficients , namely -priors and independent priors .section [ mcmc sampler ] is devoted to the description of our mcmc sampler which uses a wide portfolio of moves , including two proposed new ones .section [ performance_ess ] demonstrates the good performance of our new mcmc algorithm in a variety of real and simulated examples with different structures on the predictors . in section [ simulation study ]we complement the results of the simulation study by comparing our algorithm with the recent shotgun stochastic search algorithm of hans _ et al . _finally section [ discussion ] contains some concluding remarks .let be a sequence of observed responses and a vector of predictors for , , of dimension .moreover let be the design matrix with row .a gaussian linear model can be described by the equation where is an unknown constant , is a column vector of ones , is a vector of unknown parameters and .suppose one wants to model the relationship between and a subset of , but there is uncertainty about which subset to use .following the usual convention of only considering models that have the intercept , this problem , known as variable selection or subset selection , is particularly interesting when is large and parsimonious models containing only a few predictors are sought to gain interpretability . 
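as a concrete illustration of the set - up just described , the following sketch simulates a response from a gaussian linear model in which only a small subset of the predictors has non - zero coefficients ; the dimensions , the active set and the coefficient values are illustrative only , and the response and the columns of the design matrix are centred as assumed throughout the paper .

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 120, 300                          # illustrative sizes only
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[[0, 10, 200]] = [1.5, -1.0, 2.0]    # a small "true" subset of predictors
alpha, sigma = 0.5, 1.0
y = alpha + X @ beta + sigma * rng.standard_normal(n)

# centre the response and the columns of the design matrix
y = y - y.mean()
X = X - X.mean(axis=0)
```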
from a bayesian perspectivethe problem is tackled by placing a constant prior density on and a prior on which depends on a latent binary vector , where if and if , .the overall number of possible models defined through grows exponentially with and selecting the best model that predicts is equivalent to find one over the subsets that form the model space .given the latent variable , a gaussian linear model can therefore be written as where is the non - zero vector of coefficients extracted from , is the design matrix of dimension , , with columns corresponding to .we will assume that , apart from the intercept , contains no variables that would be included in every possible model and that the columns of the design matrix have all been centred with mean .it is recommended to treat the intercept separately and assign it a constant prior : , fernndez _ et al . _when coupled with the latent variable , the conjugate prior structure of follows a normal - inverse - gamma distribution with .some guidelines on how to fix the value of the hyperparameters and are provided in kohn _( 2001 ) , while the case corresponds to the jeffreys prior for the error variance , .taking into account ( [ t1 ] ) , ( [ t2 ] ) , ( [ t3 ] ) and the prior specification for , the joint distribution of all the variables ( based on further conditional independence conditions ) can be written as the main advantage of the conjugate structure ( [ t2 ] ) and ( [ t3 ] ) is the analytical tractability of the marginal likelihood whatever the specification of the prior covariance matrix : where , with , and ( brown _ et al ._ , 1998 ) .while the mean of the prior ( [ t2 ] ) is usually set equal to zero , , a neutral choice ( chipman _ et al ._ , 2001 ; clyde and george , 2004 ) , the specification of the prior covariance matrix leads to at least two different classes of priors : * when , where is a scalar and , it replicates the covariance structure of the likelihood giving rise to so called -priors first proposed by zellner ( 1986 ) . * when , but the components of are conditionally independent and the posterior covariance matrix is driven towards the independence case .we will adopt the notation as we want to cover both prior specification in a unified manner .thus in the -prior case , while in the independent case , . we will refer to as the _ variable selection coefficient _ for reasons that will become clear in the next section . to complete the prior specification in ( [ t4 ] ) , must be defined .a complete discussion about alternative priors on the model space can be found in chipman ( 1996 ) and chipman _ et al . _( 2001 ) . herewe adopt the beta - binomial prior illustrated in kohn _( 2001 ) with , where the choice implicitly induces a binomial prior distribution over the model size and .the hypercoefficients and can be chosen once and have been elicited . in the large , small framework , to ensure sparse regression models where , it is recommended to centre the prior for the model size away from the number of observations . it is a known fact that -priors have two attractive properties .firstly they possess an automatic scaling feature ( chipman _ et al . 
_ , 2001 ; kohn _ et al ._ , 2001 ) .in contrast , for independent priors , the effect of on the posterior distribution depends on the relative scale of and standardisation of the design matrix to units of standard deviation is recommended .however , this is not always the best procedure when is possibly skewed , or when the columns of are not defined on a common scale of measurement .the second feature that makes -priors particularly appealing is the rather simple structure of the marginal likelihood ( [ t5 ] ) with respect to the constant which becomes where , if , . for computational reasons explained in the next section, we assume that ( [ t6 ] ) is always defined : since we calculate using the qr - decomposition of the regression ( brown _ et al ._ , 1998 ) , when , .despite the simplicity of ( [ t6 ] ) , the choice of the constant for -priors is complex , see fernndez _( 2001 ) , cui and george ( 2008 ) and liang _ et al . _( 2008 ) .historically the first attempt to build a comprehensive bayesian analysis placing a prior distribution on dates back to zellner and siow ( 1980 ) , where the data adaptivity of the degree of shrinkage adapts to different scenarios better than assuming standard fixed values .zellner - siow priors , z - s hereafter , can be thought as a mixture of -priors and an inverse - gamma prior on , , leading to liang _ et al . _( 2008 ) analyse in details z - s priors pointing out a variety of theoretical properties . from a computational point of view , with z - s priors , the marginal likelihood is no more available in closed form , something which is advantageous in order to quickly perform a stochastic search ( chipman _ et al . _ , 2001 ) .even though z - s priors need no calibration and the laplace approximation can be derived ( tierney and kadane , 1986 ) , see appendix [ laplace_approx ] , never became as popular as -priors with a suitable constant value for .for alternative priors , see also cui and george ( 2008 ) and liang _ et al . 
_( 2008 ) .when all the variables are defined on the same scale , independent priors represent an attractive alternative to -priors .the likelihood marginalised over , and becomes where , if , .note that ( [ t13 ] ) is computationally more demanding than ( [ t6 ] ) due to the extra determinant operator .geweke ( 1996 ) suggests to fix a different value of , , based on the idea of substantially significant determinant of with respect to .however it is common practice to standardise the predictor variables , taking in order to place appropriate prior mass on reasonable values of the regression coefficients ( hans _ et al ._ , 2007 ) .another approach , illustrated in bae and mallick ( 2004 ) , places a prior distribution on without standardising the predictors .regardless of the prior specification for , using the qr - decomposition on a suitable transformation of and , the marginal likelihood ( [ t13 ] ) is always defined .in this section we propose a new sampling algorithm that overcomes the known difficulties faced by mcmc schemes when attempting to sample a high dimension multimodal space .we discuss in a unified manner the general case where a hyperprior on the variable selection coefficient is specified .this encompasses the -prior and independent prior structure as well as the case of fixed if a point mass prior is used .the multimodality of the model space is a known issue in variable selection and several ways to tackle this problem have been proposed in the past few years .liang and wong ( 2000 ) suggest an extension of parallel tempering called evolutionary monte carlo , emc hereafter , nott and green , n&g hereafter , ( 2004 ) introduce a sampling scheme inspired by the swendsen - wang algorithm while jasra _( 2007 ) extend emc methods to varying dimension algorithms .finally hans _ et al . _( 2007 ) propose when a new stochastic search algorithm , sss , to explore models that are in the same neighbourhood in order to quickly find the best combination of predictors .we propose to solve the issue related to the multimodality of model space ( and the dependence between and ) along the lines of emc , applying some suitable parallel tempering strategies directly on .the basic idea of parallel tempering , pt hereafter , is to weaken the dependence of a function from its parameters by adding an extra one called temperature .multiple markov chains , called population of chains , are run in parallel , where a different temperature is attached to each chain , their state is tentatively swap at every sweep by a probabilistic mechanism and the latent binary vector of the non - heated chain is recorded .the different temperatures have the effect of flatting the likelihood .this ensures that the posterior distribution is not trapped in any local mode and that the algorithm mixes efficiently , since every chain constantly tries to transmit information about its state to the others .emc extents this idea , encompassing the positive features of pt and genetic algorithms inside a mcmc scheme .since and are integrated out , only two parameters need to be sampled , namely the latent binary vector and the variable selection coefficient . 
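the marginal likelihoods ( [ t6 ] ) and ( [ t13 ] ) are the quantities that the stochastic search described next evaluates over and over , so we sketch how they might be computed , up to additive constants , via a qr decomposition in the -prior case and via a k - dimensional solve in the independent case . the algebra follows the standard conjugate forms quoted above , the names a_sig and b_sig stand for the inverse - gamma hyperparameters of the error variance , and the normalising constants are omitted : treat this as a sketch rather than the paper's implementation .

```python
import numpy as np

def log_ml_gprior(y, X, gamma, tau, a_sig=1e-3, b_sig=1e-3):
    """log p(y | gamma, tau) under the g-prior, up to an additive constant.
    y and the columns of X are assumed centred (flat prior on the intercept)."""
    n, k = y.size, int(gamma.sum())
    tss = float(y @ y)                       # centred total sum of squares
    fit = 0.0
    if k > 0:
        # reduced QR also covers the case k >= n mentioned after ([t6])
        Q, _ = np.linalg.qr(X[:, gamma.astype(bool)], mode="reduced")
        fit = float(np.sum((Q.T @ y) ** 2))  # y' X (X'X)^{-1} X' y
    S = tss - tau / (1.0 + tau) * fit
    return -0.5 * k * np.log1p(tau) - 0.5 * (2 * a_sig + n - 1) * np.log(2 * b_sig + S)

def log_ml_indep(y, X, gamma, tau, a_sig=1e-3, b_sig=1e-3):
    """Same quantity under the independent prior beta_gamma ~ N(0, tau sigma^2 I);
    note the extra determinant, which makes it more expensive to evaluate."""
    n, k = y.size, int(gamma.sum())
    tss = float(y @ y)
    if k == 0:
        return -0.5 * (2 * a_sig + n - 1) * np.log(2 * b_sig + tss)
    Xg = X[:, gamma.astype(bool)]
    A = Xg.T @ Xg + np.eye(k) / tau
    _, logdet = np.linalg.slogdet(A)
    b = Xg.T @ y
    fit = float(b @ np.linalg.solve(A, b))
    return (-0.5 * k * np.log(tau) - 0.5 * logdet
            - 0.5 * (2 * a_sig + n - 1) * np.log(2 * b_sig + tss - fit))
```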
in this set - upthe full conditionals to be considered are ^{1/t_{l}}\propto \left [ p\left ( y\left\vert \gamma _ { l},\tau \right .\right ) \right ] ^{1/t_{l}}\left [ p\left ( \gamma _ { l}\right ) \right ] ^{1/t_{l } } \label{t23bis}\ ] ] ^{1/t_{l}}p\left ( \tau \right ) , \label{t24bis}\ ] ] where is the number of chains in the the population and , , is the temperature attached to the chain while the population corresponds to a set of chains that are retained simultaneously .conditions for convergence of emc algorithms are well understood and illustrated for instance in jasra _( 2007 ) . at each sweep of our algorithm ,first the population in ( [ t23bis ] ) is updated using a variety of moves inspired by genetic algorithms : local moves , the ordinary metropolis - hastings or gibbs update on every chain ; and global moves that include : i ) selection of the chains to swap , based on some probabilistic measures of distance between them ; ii ) crossover operator , i.e. partial swap of the current state between different chains ; iii ) exchange operator , full state swap between chains .both local and global moves are important although global moves are crucial because they allow the algorithm to jump from one local mode to another . at the end of the update of , then sampled using ( [ t24bis ] ) .the implementation of emc that we propose in this paper includes several novel aspects : the use of a wide range of moves including two new ones , a local move , based on the fast scan metropolis - hastings sampler , particularly suitable when is large and a bold global move that exploits the pattern of correlation of the predictors .moreover , we developed an efficient scheme for tuning the temperature placement that capitalises the effective interchange between the chains .another new feature is to use a metropolis - within - gibbs with adaptive proposal for updating , as the full conditional ( [ t24bis ] ) is not available in closed form . in what follows, we will only sketch the rationale behind all the moves that we found useful to implement and discuss further the benefits of the new specific moves in section [ real data examples ] .for the large , small and complex predictor spaces , we believe that using a wide portfolio of moves is needed and offers better guarantee of mixing . from a notational point of view , we will use the double indexing , and to denote the latent binary indicator in the chain .moreover we indicate by the vector of binary indicators that characterise the state of the chain of the population . given , we first tried the simple mc idea of madigan and york ( 1995 ) , also used by brown _( 1998 ) where add / delete and swap moves are used to update the latent binary vector . for anadd / delete move , one of the variables is selected at random and if the latent binary value is the proposed new value is or _vice versa_. however , when , where is the size of the current model for the chain , the number of sweeps required to select by chance a binary indicator with a value of follows a geometric distribution with probability which is much smaller than to select a binary indicator with a value of .hence , the algorithm spends most of the time trying to add rather than delete a variable . 
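the following skeleton puts the pieces of one ess / emc sweep together : a tempered local move on every chain , a crossover move between two chains , and an all - exchange move over every pair of chains , followed elsewhere by the update of the variable selection coefficient . to keep the sketch short , the local move is reduced to a single metropolis flip ( the paper uses the gibbs / fsmh scheme sketched below ) and the crossover pair is chosen uniformly rather than through the boltzmann - weighted selection operator , which would add a proposal - ratio term to the acceptance probability ; log_ml can be any marginal - likelihood evaluator such as the g - prior sketch given earlier , and the default a_om / b_om values are illustrative , not the paper's elicitation .

```python
import numpy as np
from math import lgamma

def log_prior_gamma(gamma, a_om=1.0, b_om=None):
    """Beta-binomial prior on the model space, up to an additive constant."""
    p, k = gamma.size, int(gamma.sum())
    if b_om is None:
        b_om = float(p - a_om)        # illustrative default only
    return lgamma(k + a_om) + lgamma(p - k + b_om) - lgamma(p + a_om + b_om)

def ess_sweep(gammas, tau, temps, y, X, log_ml, rng):
    """One simplified sweep over a population of L tempered chains."""
    L = len(gammas)
    log_post = lambda g: log_ml(y, X, g, tau) + log_prior_gamma(g)
    lp = np.array([log_post(g) for g in gammas])

    # local move: one random tempered metropolis flip per chain
    for l in range(L):
        prop = gammas[l].copy()
        j = rng.integers(prop.size)
        prop[j] = 1 - prop[j]
        lp_prop = log_post(prop)
        if np.log(rng.random()) < (lp_prop - lp[l]) / temps[l]:
            gammas[l], lp[l] = prop, lp_prop

    # crossover: one-point crossover between a uniformly chosen pair of chains
    i, j = rng.choice(L, size=2, replace=False)
    cut = int(rng.integers(1, gammas[i].size))
    ci = np.concatenate([gammas[i][:cut], gammas[j][cut:]])
    cj = np.concatenate([gammas[j][:cut], gammas[i][cut:]])
    lpi, lpj = log_post(ci), log_post(cj)
    if np.log(rng.random()) < (lpi - lp[i]) / temps[i] + (lpj - lp[j]) / temps[j]:
        gammas[i], gammas[j], lp[i], lp[j] = ci, cj, lpi, lpj

    # all-exchange: sample one swap (or no swap) among all chain pairs, with
    # weights equal to the swap acceptance rates plus one for "no swap"
    moves, weights = [None], [1.0]
    for a in range(L):
        for b in range(a + 1, L):
            moves.append((a, b))
            log_acc = (lp[a] - lp[b]) * (1.0 / temps[b] - 1.0 / temps[a])
            weights.append(float(np.exp(min(0.0, log_acc))))
    pr = np.array(weights) / np.sum(weights)
    pick = moves[rng.choice(len(moves), p=pr)]
    if pick is not None:
        a, b = pick
        gammas[a], gammas[b] = gammas[b], gammas[a]
        lp[a], lp[b] = lp[b], lp[a]
    return gammas, lp
```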
note that this problem also affects rj - type algorithms ( dellaportas _ et al ._ , 2002 ) .on the other hand , gibbs sampling ( george and mcculloch , g&mcc hereafter , 1993 ) is not affected by this issue since the state of the chain is updated by sampling from ^{1/t_{l}}\propto \exp \left\ { \left ( \log p\left ( y\left\vert \gamma _ { l , j}^{\left ( 1\right ) } , \tau \right .\right ) + \log p\left ( \gamma _ { l , j}=1\left\vert \gamma _ { l , j^{-}}\right .\right ) \right ) /t_{l}\right\ } , \label{t34}\ ] ] where indicates for the chain all the variables , but the , and + .the main problem related to gibbs sampling is the large number of models it evaluates if a full gibbs cycle or any permutation of the indices is implemented at each sweep .each model requires the direct evaluation , or at least the update , of the time consuming quantity , equation ( [ t6 ] ) or ( [ t13 ] ) , making practically impossible to rely solely on the gibbs sampler when is very large .however , as sharply noticed by kohn _( 2001 ) , it is wasteful to evaluate all the updates in a cycle because if is much smaller than and given , it is likely that the sampled value of is again .when is large , we thus consider instead of the standard mc add / delete , swap moves , a novel fast scan metropolis - hastings scheme , fsmh hereafter , specialised for emc / pt .it is computationally less demanding than a full gibbs sampling on all and do not suffer from the problem highlighted before for mc and rj - type algorithms when .the idea behind the fsmh move is to use an additional acceptance / rejection step ( which is very fast to evaluate ) to choose the number of indices where to perform the gibbs - like step : the novelty of our fsmh sampler is that the additional probability used in the acceptance / rejection step is based not only on the current chain model size , but also on the temperature attached to the chain .therefore the aim is to save computational time in the large set - up when multiple chains are simulated in parallel and finding an alternative scheme to a full gibbs sampler . to save computational timeour strategy is to evaluate the time consuming marginal likelihood ( [ t5 ] ) in no more than approximately times per cycle in the chain ( assuming convergence is reached ) , where is the probability to select a variable to be added in the acceptance / rejection step which depends on the current model size and the temperature and similarly for ( indicates the integer part ) . since for chains attached to lower temperatures , the algorithm proposes to update _ almost all _ binary indicators with value , while it selects at random a group of approximately binary indicators with value 0 to be updated . 
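a simplified variant of the fsmh idea is sketched below : each index is first screened with a cheap bernoulli draw built from the tempered conditional prior inclusion probability , and the expensive marginal likelihood is evaluated only for the indices that pass , in which case the flip is accepted with a tempered likelihood - ratio test . the exact proposal and acceptance probabilities used in the paper are those of appendix [ fsmh scheme ] ; this sketch only reproduces the mechanism , with a_om and b_om the beta - binomial hyperparameters .

```python
import numpy as np

def fsmh_sweep(gamma, tau, temp, y, X, log_ml, a_om, b_om, rng):
    """One FSMH-style pass over the p indicators of a single tempered chain."""
    p = gamma.size
    cur_ll = log_ml(y, X, gamma, tau)
    for j in rng.permutation(p):
        k_minus = int(gamma.sum()) - int(gamma[j])
        # conditional prior inclusion probability under the beta-binomial prior
        theta = (k_minus + a_om) / (p - 1 + a_om + b_om)
        w1 = theta ** (1.0 / temp) / (theta ** (1.0 / temp) + (1 - theta) ** (1.0 / temp))
        w_flip = w1 if gamma[j] == 0 else 1.0 - w1
        if rng.random() >= w_flip:
            continue                      # cheap rejection: no likelihood evaluation
        prop = gamma.copy()
        prop[j] = 1 - prop[j]
        prop_ll = log_ml(y, X, prop, tau)
        # tempered likelihood-ratio acceptance (the prior odds cancel with the
        # proposal odds, so only the marginal-likelihood ratio remains)
        if np.log(rng.random()) < (prop_ll - cur_ll) / temp:
            gamma, cur_ll = prop, prop_ll
    return gamma, cur_ll
```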
at higher temperatures since and become more similar , the number of models evaluated in a cycle increases because much more binary indicators with value are updated .full details of the fsmh scheme is given in the appendix [ fsmh scheme ] , while evaluation of them and comparison with mc embedded in emc are presented in sections [ real data examples ] and [ simulation study ] the first step of this move consists of selecting the pair of chains to be operated on .we firstly compute a probability equal to the weight of the boltzmann probability , , where is the log transformation of the full conditional ( [ t23bis ] ) assuming , , and for some specific temperature , and then rank all the chains according to this .we use normalised boltzmann weights to increase the chance that the two selected chains will give rise , after the crossover , to a new configuration of the population with higher posterior probability .we refer to this first step as selection operator .suppose that two new latent binary vectors are then generated from the selected chains according to some crossover operator described below .the new proposed population of chains + is accepted with probability where is the proposal probability , see liang and wong ( 2000 ) . in the followingwe will assume that four different crossover operators are selected at random at every emc sweep : -point crossover , uniform crossover , adaptive crossover ( liang and wong , 2000 ) and a novel block crossover .of these four moves , the uniform crossover which shuffles the binary indicators along all the selected chains is expected to have a low acceptance , but to be able to genuinely traverse regions of low posterior probability .the block crossover essentially tries to swap a group of variables that are highly correlated and can be seen as a multi - points crossover whose crossover points are not random but defined from the correlation structure of the covariates . in practicethe block crossover is defined as follows : one variable is selected at random with probability , then the pairwise correlation between the selected predictor and each of the remaining covariates , , , is calculated .we then retain for the block crossover all the covariates with positive ( negative ) pairwise correlation with such that .the threshold is chosen with consideration to the specific problem , but we fixed it at .evaluation of block crossover and comparisons with other crossover operators are presented on a real data example in section [ real data examples ] .the exchange operator can be seen as an extreme case of crossover operator , where the first proposed chain receives the whole second chain state , and _vice versa_. in order to achieve a good acceptance rate , the exchange operator is usually applied on adjacent chains in the temperature ladder , which limits its capacity for mixing . to obtain better mixing, we implemented two different approaches : the first one is based on jasra _ et al . 
_( 2007 ) and the related idea of delayed rejection ( green and mira , 2001 ) ; the second , a bolder all - exchange move , is based on a precalculation of all the exchange acceptance rates between all chains pairs ( calvo , 2005 ) .full relevant details are presented in appendix [ exchange ] .both of these bold moves perform well in the real data applications , see section [ real data examples ] , and simulated examples , see section [ simulation study ] , thus contributing to the efficiency of the algorithm .as noted by goswami and liu ( 2007 ) , the placement of the temperature ladder is the most important ingredient in population based mcmc methods .we propose a procedure for the temperature placement which has the advantage of simplicity while preserving good accuracy .first of all , we fix the size of the population . in doing this ,we are guided by several considerations : the complexity of the problem , i.e. , the size of the data and computational limits .we have experimented and we recommend to fix . even though some of the simulated examples had ( section [ simulation study ] ) , we found that was sufficient to obtain good results . in our real data examples ( section [ real data examples ] ) , we used guided by some prior knowledge on .secondly , we fix at an initial stage , a temperature ladder according to a geometric scale such that , , with relatively large , for instance .to subsequently tune the temperature ladder , we then adopt a strategy based on monitoring only the acceptance rate of the delayed rejection exchange operator towards a target of .details of the implementation are left in appendix [ temperature ] various strategies can be used to avoid having to sample from the posterior distribution of the variable selection coefficient .the easiest way is to integrate it out through a laplace approximation ( tierney and kadane , 1986 ) or using a numerical integration such as quadrature on an infinite interval .we do not pursue these strategies and the reasons can be summarised as follows . integrating out in the populationimplicitly assumes that every chain has its own value of the variable selection coefficient ( and of the latent binary vector ) . in this set - up, two unpleasant situations can arise : firstly , if a laplace approximation is applied , _ equilibrium _ in the product space is difficult to reach because the posterior distribution of depends , through the marginal likelihood obtained using the laplace approximation , on the _ chain specific value _ of the posterior mode for , ( details in appendix [ laplace_approx ] ) .since the strength of to predict the response is weakened for chains attached to high temperatures , it turns out that for these chains , is likely to be close to zero .when the variable selection coefficient is very small , the marginal likelihood dependence on decreases even further , see for instance ( [ t6 ] ) , and chains attached to high temperatures will experience a very unstable behaviour , making the convergence in the product space hard to reach .in addition , if an automatic tuning of temperature ladder is applied , chains will increasingly be placed at a closer distance in the temperature ladder to balance the low acceptance rate of the global moves , negating the purpose of emc . 
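since the variable selection coefficient is therefore retained and sampled from its full conditional ( [ t24bis ] ) , a batch - adaptive random - walk metropolis update on the log scale of the following form might be used . the target is abstracted as a callable , so it can be the single - chain or the product - over - chains version of ( [ t24bis ] ) ; the batch size , target acceptance rate , adaptation amount and bounds below are illustrative choices , not the paper's exact values .

```python
import numpy as np

def adaptive_logtau_update(tau, log_target, step, acc, tries, rng,
                           batch=100, target_rate=0.44, delta=0.05,
                           step_bounds=(1e-2, 1e2)):
    """Random-walk Metropolis update of tau on the log scale, with the proposal
    standard deviation adapted in batches during burn-in (Roberts & Rosenthal
    style).  log_target(tau) returns log p(tau | rest) up to a constant."""
    prop = tau * np.exp(step * rng.standard_normal())
    # acceptance ratio in tau-space, including the jacobian of the log transform
    log_acc = log_target(prop) - log_target(tau) + np.log(prop) - np.log(tau)
    tries += 1
    if np.log(rng.random()) < log_acc:
        tau = prop
        acc += 1
    if tries == batch:                      # adapt once per batch (burn-in only)
        rate = acc / batch
        step *= np.exp(delta if rate > target_rate else -delta)
        step = float(np.clip(step, *step_bounds))
        acc = tries = 0
    return tau, step, acc, tries
```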
in this paper the convergenceis reached instead in the product space ^{1/t_{l}}p\left ( \tau \right ) ] , by visual inspection we see that each chain _ marginally _ reaches its _ equilibrium _ with respect to the others ; moreover , thanks to the automatic tuning of the temperature placement during the burn - in , the distributions of the chains log posterior probabilities overlap nicely , allowing effective exchange of information between the chains .table [ table_t1]eqtl , confirms that the automatic temperature selection works well ( with and without the hyperprior on ) reaching an acceptance rate for the monitored exchange ( delayed rejection ) operator close to the selected target of . the all - exchange operator shows a higher acceptance rate , while , in contrast to jasra _( 2007 ) , the overall crossover acceptance rate is reasonable high : in our experience the good performance of the crossover operator is both related to the selection operator ( section [ emc sampler ] ) and the new block crossover which shows an acceptance rate far higher than the others .finally the computational time on the same desktop computer ( see details in appendix [ performance_comparison ] ) is rather similar with or without the hyperprior , and minutes respectively for sweeps with as burn - in .the main difference among the two implementations of ess is related to the posterior model size : when is fixed at ( unit information prior , fernndez _ et al ._ , 2001 ) , there is more uncertainty and support for larger models , see figure [ fig_t2 ] ( a ) . in both cases we fix and , following prior biological knowledge on the genetic regulation .the posterior mean of the variable selection coefficient is a little smaller than the unit information prior , with ess coupled with the z - s prior favouring smaller models than when is set equal to .the best model visited ( and the corresponding ) is the same for both version of ess , while , when a hyperprior on is implemented , the stability index which indicates how much the algorithm persists on the first chain top ( not unique ) visited models ranked by the posterior probability ( appendix [ performance_comparison ] ) , shows a higher stability , see table [ table_t1] eqtl . in this case , having a data - driven level of shrinkage helps the search algorithm to better discriminate among competing models .our second example is related to the application of model ( [ t1 ] ) in another genomics example : snps , selected genome - wide from a candidate gene study , are used to predict the variation of mass spectography metabolomics data in a small human population , an example of a so - called mqtl experiment. a suitable dimension reduction of the data is performed to divide the spectra in regions or bins and -transformation is applied in order to normalise the signal .we present the key findings related to a particular metabolite bin , but the same comments can be extended to the analysis of the whole data set , where we regressed every metabolites bin _ versus _ the genotype data ( and ) . 
in this very challenging case , we still found an efficient mixing of the chains ( see table [ table_t1]mqtl ) .note that in this case the posterior mean of , , is a little larger than the unit information prior , , although the influence of the hyperprior is less important than in the previous real data example , see figure [ fig_t2 ] ( b ) .in both examples , the posterior model size favours clearly polygenic control with significant support for up to four genetic control points ( figure [ fig_t2 ] ) highlighting the advantage of performing multivariate analysis in genomics rather than the traditional univariate analysis . as expected in view of the very large number of predictors ,in the mqtl example the computational time is quite large , around hours for sweeps after a burn - in of , but as shown in table [ table_t1 ] by the stability index ( ) , we believe that the number of iterations chosen exceeds what is required in order to visit faithfully the model space . for such large data analysis tasks , parallelisation of the code could provide big gains of computer time and would be ideally suited to our multiple chains approach . [ table [ table_t1 ] about here figure [ fig_t1 ] about here figure [ fig_t2 ] about here figure [ fig_s1 ] about here ] we also evaluate the superiority of our ess algorithm , and in particular the fsmh scheme and the block crossover , with respect to more traditional emc implementations illustrated for instance in liang and wong ( 2000 ) . albeit we believe that using a wide portfolio of different moves enables any searching algorithm to better explore complicated model spaces , we reanalysed the first real data example , eqtl analysis , comparing : ( i ) ess with only fsmh as local move _ vs _ ess with only mc as local move ; ( ii ) ess with only block crossover _ vs _ ess with only 1-point , only uniform and only adaptive crossover respectively . to avoid dependency of the results on the initialisation of the algorithm, we replicated the analysis times .moreover , to make the comparison fair , in experiment ( i ) we run the two versions of ess for a different number of sweeps ( and with and as burn - in respectively ) , but matching the number of models evaluated .results are presented in table [ table_s1 ] .we report here the main findings : 1 . over the runs, ess with fsmh reaches the same top visited model % ( 17/25 ) while ess with mc the same top model only % , with a fixed , and % and % respectively with z - s prior .this ability is extended to the top models ranked by the posterior probability , data not shown , providing indirect evidence that the proposed new move helps the algorithm to increase its predictive power .the great superiority when fsmh scheme are implemented can be explained by comparing subplot ( a ) and ( c ) in figure [ fig_t1 ] : the exchange of information between chains for ess with mc as local move when ( and ) is rather poor , negating the purpose of emc .ess with mc has more difficulties to reach convergence in the product space and , in contrast to ess with fsmh , the retained chain does not easily escape from local modes .this later point can be seen looking at figure [ fig_t1 ] ( d ) which magnifies the right hand tail of the kernel density of for the recorded chain , pulling together the runs : interestingly ess with fsmh is less bumpy , showing a better ability to escape from local modes and to explore more efficiently the right tail .2 . 
regarding the second comparison when is fixed , ess with only block crossover beats constantly the other crossover operators , with % _ vs _ about % , in terms of best model visited ( table [ table_s1 ] ) and models with higher posterior probability ( data not shown ) , has higher acceptance rate ( table [ table_s2 ] ) , showing also a great capacity to accumulate posterior mass as illustrated in figure [ fig_s2 ] .the specific benefit of the block crossover is less pronounced when a prior on is specified , but we have already noticed that in this case having a hyperprior on greatly improves the efficiency of the search .[ table [ table_s1 ] about here table [ table_s2 ] about here figure [ fig_s2 ] about here ] we briefly report on a comprehensive study of the performance of ess in a variety of simulated examples as well as a comparison with sss . to make comparison with sss fair , we use ess , the version of our algorithm which assumes independent priors , ,with fixed at .details of the simulated examples ( 6 set - ups ) and how we conducted the simulation experiment ( 25 replication of each set - up ) are given in appendix [ performance_appendix ] .the rationale behind the construction of the examples was to benchmark our algorithm against both and cases , to use as building blocks intricate correlation structures that had been used in previous comparisons by g&mcc ( 1993 , 1997 ) and n&g ( 2004 ) , as well as a realistic correlation structure derived from genetic data , and to include elements of model uncertainty in some of the examples by using a range of values of regression coefficients .in our example we observe an effective exchange of information between the chains ( reported in table [ table_s3 ] ) which shows good overall acceptance rates for the collection of moves that we have implemented .the dimension of the problem does not seem to affect the acceptance rates in table [ table_s3 ] , remarkably since values of range from to between the examples .we also studied specifically the performance of the global moves ( table [ table_s4 ] ) to scrutinise our temperature tuning and confirmed the good performance of ess with good frequencies of swapping ( not far from the case where adjacent chains are selected to swap at random with equal probability ) and good measures of overlap between chains .all the examples were run in parallel with ess and sss 2.0 ( hans _ et al ._ , 2007 ) for the same number of sweeps ( 22,000 ) and matching hyperparameters on the model size .comparison were made with respect to the marginal probability of inclusion as well as the ability to reach models with high posterior probability and to persist in this region . for a detailed discussion of all comparison , see appendix [ performance_comparison ] .overall the covariates with non - zero effects have high marginal posterior probability of inclusion for ess in all the examples , see figure [ fig_s4 ] .there is good agreement between the two algorithms in general , with additional evidence on some examples ( figure [ fig_s4 ] ( c ) and ( d ) ) that ess is able to explore more fully the model space and in particular to find small effects , leading to a posterior model size that is close to the true one. 
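the comparisons reported next are largely in terms of marginal posterior inclusion probabilities ; one simple way to estimate these from the recorded ( non - heated ) chain is sketched below , either by renormalising the unnormalised posterior mass over the distinct visited models or by raw visit frequencies . both are common estimators and the paper's exact choice is not assumed here .

```python
import numpy as np

def inclusion_probabilities(visited, log_posts, renormalise=True):
    """Marginal posterior inclusion probabilities from the cold-chain output.

    visited   : (n_sweeps, p) array of recorded gamma vectors
    log_posts : (n_sweeps,) array of the corresponding log posterior kernels
    """
    visited = np.asarray(visited)
    if not renormalise:
        return visited.mean(axis=0)          # plain visit frequencies
    uniq, idx = np.unique(visited, axis=0, return_index=True)
    lp = np.asarray(log_posts)[idx]
    w = np.exp(lp - lp.max())
    w /= w.sum()
    return w @ uniq                          # mass-weighted average of indicators
```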
measures of goodness of fit and stability , table [ table_s5 ] , are in good agreement between ess and sss .the comparison highlight that a key feature of sss , its ability to move quickly towards the right model and to persist on it , is accompanied by a drawback in having difficulty to explore far apart models with competing explanatory power , in contrast to ess ( contaminated example set - up ) .altogether ess shows a small improvement of , related to its ability to pick up some of the small effects that are missed by sss .finally ess shows a remarkable superiority in terms of computational time , especially when the simulated ( and estimated ) is large . altogetherour comparisons show that we have designed a fully bayesian mcmc - emc sampler which is competitive with the effective search provided by sss . in the same spirit of the real data example analysis, we also evaluate the superiority of the fsmh scheme with respect to more traditional emc implementations , i.e when a mc local move is selected . while both versions of the search algorithm visit almost the same top models ranked by the posterior probability , ess persists more on the top models . [ table [ table_s3 ] about here table [ table_s4 ] about here table [ table_s5 ] about here + figure [ fig_s3 ] about here figure [ fig_s4 ] about here ]the key idea in constructing an effective mcmc sampler for and is to add an extra parameter , the temperature , that weakens the likelihood contribution and enables escaping from local modes .running parallel chains at different temperature is , on the other hand , expensive and the added computational cost has to be balanced against the gains arising from the various exchanges between the chains .this is why we focussed on developing a good strategy for selecting the pairs of chains , using both marginal and joint information between the chains , attempting bold and more conservative exchanges .combining this with an automatic choice of the temperature ladder during burn - in is one of the key element of our ess algorithm . using pt in this way has the potential to be effective in a wide range of situations where the posterior space is multimodal .to tackle the case where is large with respect to , the second important element in our algorithm is the use of a metropolised gibbs sampling - like step performed on a subset of indices in the local updating of the latent binary vector , rather than an mc or rj - like updating move .the new fast scan metropolis hastings sampler that we propose to perform these local moves achieves an effective compromise between full gibbs sampling that is not feasible at every sweep when is large and vanilla add / delete moves .comparison of fsmh _ vs _ mc scheme on a real data example and simulation study shows the superiority of our new local move . when a model with a prior on the variable selection coefficient is preferred , the updating of itself present no particular difficulties and is computationally inexpensive .moreover , using an adaptive sampler makes the algorithm self contained without any time consuming tuning of the proposal variance .this latter strategy works perfectly well both in the -prior and independent prior case as illustrated in sections [ real data examples ] and [ simulation study ] .our current implementation does not make use of the output of the heated chains for posterior inference .whether gains in variance reduction could be achieved in the spirit of gramacy _ et al . 
_( 2007 ) is an area for further exploration , which is beyond the scope of the present work .our approach has been applied so far to linear regression with univariate response .an interesting generalisation is that of a multidimensional response and the identification of regressors that jointly predict the ( brown _ et al . _ , 1998 ) .much of our set - up and algorithm carries through without difficulties and we have already implemented our algorithm in this framework in a challenging case study in genomics with multidimensional outcomes .the authors are thankful to norbert hubner and timothy aitman for providing the data of the eqtl example , gareth roberts and jeffrey rosenthal for helpful discussions about adaptation and michail papathomas for his detailed comments .sylvia richardson acknowledges support from the mrc grant go.600609 .in this section we will describe some technical details omitted from the paper and related to the sampling schemes we used for the population of binary latent vectors and the selection coefficient .let , and to denote the latent binary indicator in the chain .as in kohn _( 2001 ) , let and + .furthermore let and and finally and . from ( [ t7 ] )it is easy to prove that where is the current model size for the chain . using the above equation , for the normalised version of ( [ t34 ] )can be written as ^{1/t_{l}}=\frac{\left . \theta_ { l , j}^{\left ( 1\right ) } \right . ^{1/t_{l}}\left .l_{l , j}^{\left ( 1\right ) } \right .^{1/t_{l}}}{s\left ( 1/t_{l}\right ) } , \label{al2}\ ] ] where with ^{1/t_{l}} ] is small as well .therefore for the gibbs sampler with a beta - binomial prior on the model space , the posterior probability of depends crucially on . in the following we derive a fast scan metropolis - hastings scheme specialised for evolutionary monte carlo or parallel tempering .we define as the proposal probability to go from to and the proposal probability to go from to for the variable and chain . moreover using the notation introduced before, the metropolis - within - gibbs version of ( [ t34 ] ) to go from to in the emc local move is with a similar expression for .the proof of the propositions are omitted since they are easy to check .we first introduce the following proposition which is useful for the calculation of the acceptance probability in the fsmh scheme .[ prop1 ] the following three conditions are equivalent : a ) ; + b ) ; c) , where is the convex combination of the marginal likelihood and with weights and .the fsmh scheme can be seen as a random scan metropolis - within - gibbs algorithm where the number of evaluations is linked to the prior / current model size and the temperature attached to the chain .the computation requirement for the additional acceptance / rejection step is very modest since the normalised tempered version of ( [ al1 ] ) is used .[ prop3 ] let , ( or any permutation of them ) , and with .the acceptance probabilities are the above sampling scheme works as follows .given the chain , if ( and similarly for ) , it proposes the new value from a bernoulli distribution with probability : if the proposed value is different from the current one , it evaluates ( [ al6 ] ) ( and similarly [ al7])otherwise it selects a new covariate .finally it can be proved that the gibbs sampler is more efficient than the fsmh scheme , i.e. 
for a fixed number of iterations , gibbs sampling mcmc standard error is lower than for fsmh sampler .however the gibbs sampler is computationally more expensive so that , if is very large , as described in kohn _ et al . _ ( 2001 ) , fsmh scheme becomes more efficient per floating point operation .the exchange operator can be seen as an extreme case of crossover operator , where the first proposed chain receives the whole second chain state , and the second proposed chain receives the whole first state chain , respectively . in order to achieve a good acceptance rate ,the exchange operator is usually applied on adjacent chains in the temperature ladder , which limits its capacity for mixing . to obtain better mixing, we implemented two different approaches : the first one is based on jasra _ et al . _( 2007 ) and the related idea of delayed rejection ( green and mira , 2001 ) ; the second one on gibbs distribution over all possible chains pairs ( calvo , 2005 ) . 1. the delayed rejection exchange operator tries first to swap the state of the chains that are usually far apart in the temperature ladder , but , once the proposed move has been rejected , it performs a more traditional ( uniform ) adjacent pair selection , increasing the overall mixing between chains on one hand without drastically reducing the acceptance rate on the other .however its flexibility comes at some extra computational costs and in particular the additional evaluation of the pseudo move necessary to maintain detailed balance ( green and mira , 2001 ) .details are reported below .+ suppose two chains are selected at random , and with , in order to swap their binary latent vector .then , given that , and , ( 13 ) reduces to since the two chains are selected at random , the above acceptance probability decreases exponentially with the difference and therefore most of the proposed moves are rejected .if rejected , a delayed rejection - type move is applied between two random adjacent chains , with the first one and , , the second one , giving rise to the new acceptance probability where the pseudo move is necessary in order to maintain the detailed balance condition ( green and mira , 2001 ) .2 . alternatively, we attempt a bolder all - exchange operator . swapping the state of two chains that are far apart in the temperature ladder speeds up the convergence of the simulation since it replaces several adjacent swaps with a single move .however , this move can be seen as a rare event whose acceptance probability is low and unknown .since the full set of possible exchange moves is finite and discrete , it is easy and computationally inexpensive to calculate all the exchange acceptance rates between all chains pairs , inclusive the rare ones , . to maintain detailed balance condition, the possibility not to perform any exchange ( rejection ) must be added with unnormalised probability one .finally the chains whose states are swopped are selected at random with probability equal to where in ( [ t32 ] ) each pair is denoted by a single number , , including the rejection move , .first we select the number of chains close to the complexity of the problem , i.e. 
, although the size of the data and computational limits need to be taken into account .secondly , we fix a first stage temperature ladder according to a geometric scale such that , , with relatively large , for instance .finally , we adopt a strategy similar to the one described in roberts and rosenthal ( 2008 ) , but _ restricted to the burn - in stage _ , monitoring only the acceptance rate of the delayed rejection exchange operator . after the batch of emc sweeps , to be chosen but usually set equal to , we update , the value of the constant up to the batch , by adding or subtracting an amount such that the acceptance rate of the delayed rejection exchange operator is as close as possible to ( liu , 2001 ; jasra _ et al ._ , 2007 ) , .specifically the value of is chosen such that at the end of the burn - in period the value of can be 1 . to be precise, we fix the value of as , where is the first value assigned to the geometric ratio and is the total number of batches in the burn - in period . under model ( 1 ) andprior specification for , ( 2 ) and ( 3 ) , we provide the laplace approximation of for the -prior case , while the approximation for the independent case can be derived following the same line of reasoning . for easy of notationwe drop the chain subscript index and we assume that the observed responses have been centred with mean , i.e. . in the followingwe will distinguish the cases in which the posterior mode is a solution of a cubic or quadratic equation .conditions on the existence of the solutions are provided as well as those that guarantee the positive semidefiniteness of the variance approximation . recall that where is the posterior mode after the transformation , which is necessary to avoid problems on the boundary , is the approximate squared root of the variance calculated in and is the jacobian of the transformation .details about laplace approximation can be found in tierney and kadane ( 1986 ) .similar derivations when are presented in liang _finally throughout the presentation we will assume that and that and are fixed small as in kohn _( 2001 ) .+ if the posterior mode is the only positive root of the integrand function \right\ } ^{-\left ( 2a_{\sigma } + n-1\right)/2 } \frac{e^{-b_{\tau}/e^{\lambda } } } { \left ( e^{\lambda } \right ) ^{a_{\tau}+1}}e^{\lambda } , \ ] ] where the last factor in the above equation is the jacobian of the transformation . after the calculus of the first derivative of the log transformation and some algebra manipulations, it can be shown that is the solution of the cubic equation and that _ { \lambda = \hat{\lambda}}^{-1 } , \label{al2_bis}\end{aligned}\ ] ] where , , and . following liang _( 2008 ) , since , because , and , because , at least one real positive solution exists .moreover since , the remaining two real solutions should have the same sign ( abramowitz and stegun , 1970 ) .a necessary condition for the existence of just one real positive solution is that the summation of all the pairs - products of the coefficients is negative and this happens if .when and thus , the above condition corresponds to and when , as especially when is large , which might be expected when becomes large , the condition is equivalent to .therefore it turns out that a sufficient condition for the existence of just one real positive solution in ( [ al1 ] ) is .the positive semidefiniteness of the approximate variance can be proved as follows .first of all it is worth noticing that all the terms in ( [ al2_bis ] ) are of the same order . 
then , when , the positive semidefiniteness is always guaranteed , while when , provided that is large , the middle term in ( [ al2_bis ] ) tends to zero and the condition is fulfilled if . + if , with , is only the positive root of the integrand function \right\ } ^{-\left ( 2a_{\sigma } + n-1\right ) /2}e^{\lambda } \ ] ] or , after the first derivative of the log transformation , the solution of the quadratic equation with /2 ] and we fix them equal to and respectively . in practise these bounds do not create any restriction since the sequence of the standard deviations of the proposal distribution stabilises almost immediately , indicating that the transition kernel converges in a bounded number of batches , see figure [ fig_t2 ] .in this section we report in details on the performance of ess in a variety of simulated examples .main conclusions are summarised in the section [ simulation study ] .firstly we analyse the simulated examples with ess the version of our algorithm which assumes independent priors , , so as to enable comparisons with sss which also implements an independent prior .moreover , in order to make to comparison with sss fair , in the simulation study only the first step of the algorithm described in section 3.3 is performed , with fixed at . as in sss , standardisation of the covariatesis done before running ess .we run ess and sss 2.0 ( hans _ et al ._ , 2007 ) for the same number of sweeps ( 22,000 ) and with matching hyperparameters on the model size .secondly , to discuss the mixing properties of ess when a prior is defined on , we implement both the -prior and independent prior set - up for a particular simulated experiment . to be precise in the former case we will use the zellner - siow priors ( [ t9 ] ) , and for the latter we will specify a proper but diffuse exponential distribution as suggested by bae and mallick ( 2004 ) .we apply ess with independent priors to an extensive and challenging range of simulated examples with fixed at : the first three examples ( ex1-ex3 ) consider the case while the remaining three ( ex4-ex6 ) have .moreover in all examples , except the last one , we simulate the design matrix , creating more and more intricated correlation structures between the covariates in order to test the proposed algorithm in different and increasingly more realistic scenarios . in the last example, we use , as design matrix , a genetic region spanning -kb from the hapmap project ( altshuler _ et al . _ , 2005 ) .simulated experiments ex1-ex5 share in common the way we build . in order to create moderate to strong correlation, we found useful referring to two simulated examples in george and mcculloch , g&mcc hereafter , ( 1993 ) and in g&mcc ( 1997 ) : throughout we call ( ) and the design matrix obtained from these two examples .in particular the column of , indicated as , is simulated as , where iid independently form , inducing a pairwise correlation of . is generated as follows : firstly we simulated iid and we set for only . to induce strong multicollinearity , we then set , , , and .a pairwise correlation of about 0.998 between and for is introduced and similarly strong linear relationship is present within the sets and . then , as in nott and green , n&g hereafter , ( 2004 ) example 2 , more complex structures are created by placing side by side combinations of and/or , with different sample size .we will vary the number of samples in and as we construct our examples .the levels of are taken from the simulation study of fernndez _ et al . 
_( 2001 ) , while the number of true effects , , with the exception of ex3 , varies from to .finally the simulated error variance ranges from to in order to vary the level of difficulty for the search algorithm . throughoutwe only list the non - zero and assume that .the six examples can be summarised as follows : * is a matrix of dimension , where the responses are simulated from ( 1 ) using , , , and . in the followingwe will not refer to the intercept any more since , as described in section 3.3 in the paper , we consider centred and hence there is no difference in the results if the intercept is simulated or not .this is the simplest of our example , although , as reported in g&mcc ( 1993 ) the average pairwise correlation is about , making it already hard to analyse by standard stepwise methods .* this example is taken directly from n&g ( 2004 ) , example 2 , who first introduce the idea of combining simpler building blocks to create a new matrix : in their example ] ( dimension ) .secondly we place side by side five copies of , ] , with , a larger version of .we partitioned the responses such that ^{t} ] , by visual inspection each chain _ marginally _ reaches its _ equilibrium _ with respect to the others ; moreover , thanks to the automatic tuning of the temperature placement during the burn - in , the distributions of their log posterior probabilities overlap nicely , allowing effective exchange of information between the chains . figure [ fig_s3 ] , bottom panels , shows the trace plot of the log posterior and the model size for a replicate of ex4 .we can see that also in the case , the chains mix and overlap well with no gaps between them , the automatic tuning of the temperature ladder being able to improve drastically the performance of the algorithm .this effective exchange of information is demonstrated in table [ table_s3 ] which shows good overall acceptance rates for the collection of moves that we have implemented .the dimension of the problem does not seem to affect the acceptance rate of the ( delayed rejection ) exchange operator which stays very stable and close to the target : for instance in ex4 ( ) and ex6 ( ) the mean and standard deviation of the acceptance rate are ( ) and ( ) while in ex5 ( ) we have ( ) : the higher variability in ex4 being related to the model size . with regards to the crossover operators ,again we observe stability across all the examples .moreover , in contrast to jasra _( 2007 ) , when , the crossover average acceptance rate across the five chains is quite stable between , ex4 , and , ex6 ( with the lower value in ex4 here again due to ) : within our limited experiments , we believe that the good performance of crossover operator is related to the selection operator and the new block crossover , see section [ emc sampler ] . 
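To make the exchange mechanism discussed above concrete, the following is a minimal sketch of a geometric temperature ladder and of a single exchange (swap) move between two adjacent tempered chains. It uses the standard parallel-tempering acceptance ratio; the exact acceptance probability of eq. (13), the delayed-rejection fallback and the all-exchange variant of the paper are not reproduced, and all names and numerical values are illustrative.

```python
import numpy as np

def geometric_ladder(n_chains, b=2.0):
    """Temperature ladder t_l = b**(l-1), l = 1..n_chains (a common geometric choice)."""
    return np.array([b ** l for l in range(n_chains)])

def exchange_move(log_posts, temps, rng):
    """Propose swapping the states of two adjacent chains picked uniformly at random.

    log_posts : un-tempered log posterior of each chain's current state.
    temps     : temperature ladder (temps[0] = 1 is the cold chain).
    Returns the pair of chain indices to swap, or None if the move is rejected.
    """
    i = rng.integers(0, len(temps) - 1)
    j = i + 1
    # Standard parallel-tempering acceptance ratio for swapping the states of chains i and j.
    log_alpha = (1.0 / temps[i] - 1.0 / temps[j]) * (log_posts[j] - log_posts[i])
    if np.log(rng.uniform()) < min(0.0, log_alpha):
        return i, j
    return None

rng = np.random.default_rng(0)
temps = geometric_ladder(5)
log_posts = np.array([-100.0, -110.0, -130.0, -170.0, -240.0])  # illustrative values
print(exchange_move(log_posts, temps, rng))
```

In the delayed-rejection scheme described earlier, a randomly chosen (possibly distant) pair is tried first and only after rejection is an adjacent pair of this kind attempted; the sketch shows only the adjacent case.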
some finer tuning of the temperature ladder could still be performed as there seems to be an indication that fewer global moves are accepted with the higher temperature chain , see table [ table_s4 ] , where swapping probabilities for each chain are indicated .note that the observed frequency of successful swaps is not far from the case where adjacent chains are selected to swap at random with equal probability .other measures of overlapping between chains ( liang and wong , 2000 ; iba 2001 ) , based on a suitable index of variation of across sweeps , confirm the good performance of ess .again some instability is present in the high temperature chains , see in table [ table_s4 ] the overlapping index between chains and in example 3 to 6 .in ex1 , we also investigate the influence of different values of the prior mean of the model size .we found that the average ( standard deviation in brackets ) acceptance rate across replicates for the delayed rejection exchange operator ranges from ( ) to 0.500 ( 0.040 ) for different values of the prior mean on the model size , while the acceptance rate for the crossover operator ranges from ( ) to ( ) .this strong stability is not surprising because the automatic tuning modifies the temperature ladder in order to compensate for .finally we notice that the acceptance rates for the local move , when , increases with higher values of the prior mean model size , showing that locally the algorithm moves more freely with than with .we conclude this section by discussing in details the overall performance of ess with respect to the selection of the true simulated effects . as a first measure of performance ,we report for all the simulated examples the marginal posterior probability of inclusion as described in g&mcc ( 1997 ) and hans _ et al . _( 2007 ) . in the following , for ease of notation, we drop the chain subscript index and we exclusively refer to the first chain .to be precise , we evaluate the marginal posterior probability of inclusion as : with and the number of sweeps after the burn - in .the posterior model size is similarly defined , , with as before .besides plotting the marginal posterior inclusion probability ( [ l35 ] ) averaged across sweeps and replicates for our simulated examples , we will also compute the interquartile range of ( [ l35 ] ) across replicates as a measure of variability . in order to thoroughly compare the proposed ess algorithm to sss ( hans _ et al ._ , 2007 ) , we present also some other measures of performance based on and : first we rank in decreasing order and record the indicator that corresponds to the maximum and largest ( after burn - in ) . 
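A minimal sketch of the two Monte Carlo estimates just introduced — the marginal posterior inclusion probability of each covariate as the average of its binary indicator over the post burn-in sweeps of the first (cold) chain, and the posterior model size as the average number of included covariates — is given below. Variable names and the toy data are illustrative only.

```python
import numpy as np

def inclusion_probabilities(gamma_draws, burn_in):
    """gamma_draws : (n_sweeps, p) array of 0/1 latent vectors visited by the cold chain.

    Returns the estimated marginal posterior inclusion probability of each of the
    p covariates and the estimated posterior model size, using sweeps after burn_in.
    """
    kept = np.asarray(gamma_draws)[burn_in:]
    incl_prob = kept.mean(axis=0)          # average of each gamma_j over the kept sweeps
    model_size = kept.sum(axis=1).mean()   # average number of selected covariates
    return incl_prob, model_size

# toy usage: 1000 sweeps over p = 4 covariates
rng = np.random.default_rng(1)
draws = rng.binomial(1, [0.9, 0.1, 0.05, 0.8], size=(1000, 4))
probs, size = inclusion_probabilities(draws, burn_in=200)
print(probs, size)
```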
given the above set of latent binary vectors , we then compute the corresponding leading to : as well as the mean over the largest , : largest , both quantities averaged across replicates .moreover the actual ability of the algorithm to reach regions of high posterior probability and persist on them is monitored : given the sequence of the best ( based on ) , the standard deviation of the corresponding shows how stable is the searching strategy at least for the top ranked ( not unique ) posterior probabilities : averaging over the replicates , it provides an heuristic measures of stability of the algorithm .finally we report the average computational time ( in minutes ) across replicates of ess written in matlab code and run on a 2mhz cpu with 1.5 gb ram desktop computer and of sss version 2.0 on the same computer .figure [ fig_s4 ] presents the marginal posterior probability of inclusion for ess with averaged across replicates and , as a measure of variability , the interquartile range , blue left triangles and vertical blue solid line respectively . in generalthe covariates with non - zero effects have high marginal posterior probability of inclusion in all the examples : for example in ex3 , figure [ fig_s4 ] ( a ) , the proposed ess algorithm , blue left triangle , is able to perfectly select the last covariates , while the first , which do not contribute to , receive small marginal posterior probability .it is interesting to note that this group of covariates , , although correctly recognised having no influence on , show some variability across replicates , vertical blue solid line : however , this is not surprising since independent priors are less suitable in situations where all the covariates are mildly - strongly correlated as in this simulated example . on the other hand the second set of covariates with small effects , ,are univocally detected .the ability of ess to select variables with small effects is also evident in ex6 , figure [ fig_s4 ] ( d ) , where the two smallest coefficients , and ( the second and last respectively from left to right ) , receive from high to very high marginal posterior probability ( and similarly for the other replicates , data not shown ) . in some cases however , some covariates attached with small effects are missed ( e.g. ex4 , figure [ fig_s4 ] ( b ) , the last simulated effect which is also the smallest , , is not detected ) . 
in this situationhowever the vertical blue solid line indicates that for some replicates , ess is able to assign small values of the marginal posterior probability giving evidence that ess fully explore the whole space of models .superimposed on all pictures of figure [ table_s4 ] are the median and interquartile range across replicates of , , for sss , red right triangles and vertical red dashed line respectively .we see that there is good agreement between the two algorithms in general , with in addition evidence that ess is able to explore more fully the model space and in particular to find small effects , leading to a posterior model size that is close to the true one .for instance in ex3 , figure [ fig_s4 ] ( a ) , where the last covariates accounts for most of , sss has difficulty to detect , while in ex6 , it misses , the smallest effect , and surprisingly also assigning a very small marginal posterior probability ( and in general for the small effects in most replicates , data not shown ) .however the most marked difference between ess and sss is present in ex5 : as for ess , sss misses three effects of model 1 but in addition , and receive also very low marginal posterior probability , red right triangle , with high variability across replicates , vertical red dashed line .moreover on the extreme left , as noted before , ess is able to capture the biggest coefficient of model 2 while sss misses completely all contaminated effects .no noticeable differences between ess and sss are present in ex1 and ex2 for the marginal posterior probability , while in ex4 , sss shows more variability in ( red dashed vertical lines compared to blue solid vertical lines ) for some covariates that do receive the highest marginal posterior probability .in contrast to the differences in the marginal posterior probability of inclusion , there is general agreement between the two algorithms with respect to some measures of goodness of fit and stability , see table [ table_s5 ] .again , not surprisingly , the main difference is seen in ex5 where ess with reaches a better both for the maximum and the largest .sss shows more stability in all examples , but the last : this was somehow expected since one key features of sss in its ability to move quickly towards the right model and to persist on it ( hans _ et al ._ , 2007 ) , but a drawback of this is its difficulty to explore far apart models with competing as in ex5 . note that ess shows a small improvement of in all the simulated examples .this is related to the ability of ess to pick up some of the small effects that are missed by sss , see figure [ fig_s4 ] .finally ess shows a remarkable superiority in terms of computational time especially when the simulated ( and estimated ) is large ( in other simulated examples , data not shown , we found this is always true when ) : the explanation lies in the number of different models sss and ess evaluate at each sweep .indeed , sss evaluates , where is the size of the current model , while ess theoretically analyses an equally large number of models , , but , when , the actual number of models evaluated is drastically reduced thanks to our fsmh sampler . in only onecase sss beats ess in term of computational time ( ex5 ) , but in this instance sss clearly underestimates the simulated model and hence performs less evaluations than would be necessary to explore faithfully the model space . 
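Since several of the comparisons above rest on the coefficient of determination of the top-ranked models, a generic sketch of how such an R-squared can be obtained for a given binary inclusion vector is shown below: ordinary least squares on the selected columns with the responses centred, as in the set-up of the paper. The paper does not spell out this computation explicitly, so the details (and the toy data) should be read as assumptions for illustration.

```python
import numpy as np

def r_squared(y, X, gamma):
    """R^2 of the least-squares fit of the centred responses on the columns of X selected by gamma."""
    y = np.asarray(y, dtype=float)
    y = y - y.mean()                        # responses are centred, as assumed in the paper
    cols = np.flatnonzero(gamma)
    if cols.size == 0:
        return 0.0
    Xg = X[:, cols]
    beta, *_ = np.linalg.lstsq(Xg, y, rcond=None)
    resid = y - Xg @ beta
    return 1.0 - resid @ resid / (y @ y)

rng = np.random.default_rng(2)
X = rng.normal(size=(120, 10))
y = X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=120)
print(r_squared(y, X, gamma=[1, 0, 0, 1, 0, 0, 0, 0, 0, 0]))
```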
in conclusion , we see that the rich porfolio of moves and the use of parallel chains makes ess robust for tackling complex covariate space as well as competitive against a state of the art search algorithm .= 1.5em = 1 abramowitz , m. and stegun , i. ( 1970 ) ._ handbook of mathematical functions_. new york : dover publications , inc . = 1em = 1 chipman , h. , george , e.i . and mcculloch , r.e .( 2001 ) . the practical implementation of bayesian model selection ( with discussion ) . in _model selection _( p. lahiri , ed ) , 66 - 134 .ims : beachwood , oh . = 1em = 1 geweke , j. ( 1996 ) .variable selection and model comparison in regression . in _bayesian statistics 5 , proc .5th int . meeting _bernardo , j.o .berger , a.p .dawid and a.f.m .smith , eds ) , 609 - 20 .claredon press : oxford , uk .= 1em = 1 wilson , m.a . ,iversen , e.s . ,clyde , m.a . ,schmidler , s.c . and shildkraut , j.m .bayesian model search and multilevel inference for snp association studies .available at : + ` http://arxiv.org/abs/0908.1144 ` = 1em = 1 zellner , a. ( 1986 ) . on assessing prior distributions and bayesian regression analysis with -prior distributions . in _bayesian inference and decision techniques - essays in honour of bruno de finetti _goel and a. zellner , eds ) , 233 - 243 .amsterdam : north - holland .= 1em = 1 zellner , a. and siow , a. ( 1980 ) .posterior odds ratios for selected regression hypotheses . in _bayesian statistics , proc .bernardo , m.h .de groot , d.v .lindley and a.f.m .smith , eds ) , 585 - 603 .valencia : university press .replicates of the analysis of the first real data example and normalised by the total mass found by ess , , with only block crossover move ( ) .1-point and uniform crossover accumulate around % of the total mass accumulated by ess with only block crossover , while adaptive crossover only %.,title="fig : " ]

|
implementing bayesian variable selection for linear gaussian regression models for analysing high dimensional data sets is of current interest in many fields . in order to make such analysis operational , we propose a new sampling algorithm based upon evolutionary monte carlo and designed to work under the large , small paradigm , thus making fully bayesian multivariate analysis feasible , for example , in genetics / genomics experiments . two real data examples in genomics are presented , demonstrating the performance of the algorithm in a space of up to covariates . finally , the methodology is compared with a recently proposed search algorithm in an extensive simulation study . _ keywords _ : evolutionary monte carlo ; fast scan metropolis - hastings schemes ; linear gaussian regression models ; variable selection .
|
consider an incompressible isothermal and isotropic binary solution of constant molar volume which is initially homogeneous .if the temperature is rapidly quenched under a critical value , the two solutes , a and b , begin to separate from each other and the domain is split in a - rich and b - rich subdomains .the early stage of this process in which the solution is called spinodal decomposition . in order to descrive this phenomenon ,a well - known mathematical approach was introduced by j.w .cahn and j.e .hilliard ( see ) who introduced an appropriate free energy functional .more precisely , given a bounded domain with occupied by a binary mixture of components a and b with a mass fraction of and , respectively , setting , the free energy functional is given by the term is a surface tension term and is proportional to the thickness of the diffused interface . is a double well potential energy density which favors the separation of phases , that is , the formation of a - rich and b - rich regions .although the physically relevant potential is logarithmic ( see ( * ? ? ?* ; * ? ? ?* ( 3.1 ) ) ) , in the literature it is usually approximated , for instance , by the following polynomial function once an energy is defined , the spinodal decomposition can be viewed as an energy minimizing process .thus , the evolution of the phenomenon can be described as a gradient flow ( cf . , see also ) on a given time interval , , where is called mobility and the chemical potential is the first variation of , that is , this is the well - known cahn - hilliard equation which has been widely studied by many authors .here we just mention some pioneering works for the case with constant mobility , while for nonconstant ( and degenerate at pure phases ) mobility the basic reference is ( cf .also for nondegenerate mobility ) . for further referencessee , for instance , . however , the purely phenomenological derivation of the cahn - hilliard equation is somewhat unsatisfactory from the physical viewpoint .this led giacomin and lebowitz to study the problem of phase separation from the microscopic viewpoint using a statistical mechanics approach ( see and also ) . performing the hydrodynamic limit they deduced a continuum model which is a nonlocal version of the cahn - hilliard equation , namely , they found the following where is defined by where is a convolution kernel such that and . 
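Before turning to the functional framework, the following is a purely schematic numerical illustration of the nonlocal equation just written: a one-dimensional, periodic, explicit finite-difference step for the nonlocal Cahn-Hilliard dynamics with constant mobility, the quartic double well F(s) = (s^2 - 1)^2/4 and a smooth peaked kernel. The discretisation, kernel, boundary conditions and step sizes are assumptions chosen for simplicity and are not part of the analysis carried out in this paper.

```python
import numpy as np

def nonlocal_ch_step(phi, J, dx, dt):
    """One explicit Euler step of phi_t = Laplace(mu), mu = a*phi - J*phi + F'(phi),
    on a 1D periodic grid; J is the kernel sampled on the grid and a = (J * 1)."""
    conv = dx * np.real(np.fft.ifft(np.fft.fft(J) * np.fft.fft(phi)))  # (J * phi) via FFT
    a = dx * J.sum()                                                   # (J * 1), constant on a periodic grid
    mu = a * phi - conv + phi**3 - phi                                 # F'(s) = s^3 - s
    lap = (np.roll(mu, -1) - 2 * mu + np.roll(mu, 1)) / dx**2          # periodic Laplacian
    return phi + dt * lap

n, L = 256, 2 * np.pi
x = np.linspace(0.0, L, n, endpoint=False)
dx = L / n
J = np.exp(-np.minimum(x, L - x) ** 2 / 0.02)   # a peaked, even, periodic kernel (illustrative)
rng = np.random.default_rng(3)
phi = 0.1 * rng.standard_normal(n)              # small perturbation of the mixed state
for _ in range(2000):
    phi = nonlocal_ch_step(phi, J, dx, dt=1e-5)
print(phi.min(), phi.max())
```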
in this case , the free energy functional is given by it is worth observing that can be recovered from using taylor expansion for the term and supposing that is sufficiently peaked around zero .thus the original cahn - hilliard equation can be seen as an approximation of its local version .furthermore , as discussed in , the corresponding sharp interface model is the same as the local cahn - hilliard equation .the nonlocal cahn - hilliard equation with degenerate mobility and logarithmic potential was first studied rigorously in .then more general results were proven in .the longtime behavior of single solutions was studied in , while the global longterm dynamics ( say , existence of global and exponential attractors ) was investigated in .the case of constant mobility and smooth potential was treated in .more recently , some works have been devoted to study the nonlocal cahn - hilliard equation coupled with the navier - stokes system ( see ) .this is a nonlocal variant of a well - known diffuse interface model for phase separation in two - phase incompressible and isothermal fluids ( model h ) .all such contributions produced , as by - products , further results on the nonlocal cahn - hilliard equation . taking advantage of some of these results , herewe want to introduce and study two nonlocal cahn - hilliard equations with reaction terms which have already been proposed and analyzed in the local case : the cahn - hilliard - oono ( cho ) equation and the cahn - hilliard - bertozzi - esedoglu - gillette ( chbeg ) equation ( see also for other types of reaction terms ) .let us introduce the former first . in authors introduce a reaction between the molecules or the polymers in the mixture in order to stabilize and control the length of the patterns during spinodal decomposition .let us suppose that the mixture consists of molecules or polymers of type a and b and that a chemical reaction {\gamma_1 } b ] such that ( h5 ) : : .( h6 ) : : is a given positive constant .( h7 ) : : is a given constant .( h8 ) : : .[ perturbation ] assumption ( h2 ) implies that the potential is a quadratic perturbation of a strictly convex function .indeed can be represented as with strictly convex , since in . here and observe that derives from ( h1 ) .[ growth ] since is bounded from below , it is easy to see that ( h4 ) implies that has polynomial growth of order , where is the conjugate index to p. namely there exist and such that the potential satisfies all the hypotheses on .[ h4 ] it is easy to show that ( h4 ) implies furthermore ( h3 ) implies that take for simplicity in equation and observe that it can formally be rewritten as follows from which the crucial role of ( h2 ) is evident .this remark also holds for .we can now give our definition of weak solution to problem : [ soluzione debole ] let such that and be given .then is a weak solution to problem on ] .observe that if we choose in and we set , then we obtain thus the total mass is not conserved in general .existence and uniqueness for problem is given by : [ buona posizione ] let such that and suppose(h0)-(h8 ) are satisfied .then , for every there exists a unique weak solution to problem on ] and ; v') ] for some maximal time ] corresponding to if : and for every and for almost any . 
here is given by .[ l2oth ] observe that , thanks to hypotheses ( h8 ) and ( i6)-(i7 ) , if then .we are now ready to state the existence theorem : [ buona posizione chb ] let such that and suppose ( h0)-(h5 ) , ( h8 ) , ( i6)-(i7 ) are satisfied .then , for every given , there exists a weak solution to problem on ] corresponding to , there exists a positive constant such that : assumption ( i8 ) implies that .we employ the schauder fixed point theorem in one of his many variants .let us consider the following problem \displaystyle \mu = a{\varphi}-j\ast{\varphi}+ f'({\varphi } ) & \text{in }\\[1.4ex ] \displaystyle \frac{\partial\mu}{\partial n}=0 , & \text{on } \\[1.4ex ] \displaystyle { \varphi}(0)={\varphi}_0 , & \text{in }. \end{cases}\ ] ] where .for any given , theorem [ buona posizione ] entails that there exists a unique weak solution to ( just take ) .let , set for all , and denote by the closed ball of of radius centered at .here we show that there exists such that .suppose .then the energy identity gives adding and subtracting to in the last term of the energy equality yields observe now that and ( cf . ) thus and yield similarly , we have by exploiting , on account of and , we obtain from the following inequality observe that furthermore , by arguing as in the proof of theorem [ buona posizione ] to obtain , we know that ( h3 ) entails that there is such that thus , integrating with respect to time between and , we get ( cf.(i7 ) and ( i8 ) ) ,\end{aligned}\ ] ] where is such that then , an application of the gronwall lemma provides using ( h3 ) once more to control , we end up with ( cf . ) ,\ ] ] so that for some .let be a bounded sequence in and consider .then , it is easy to prove that every satisfies the energy equality with and .also , by arguing as in the previous subsection , we find that also satisfies for every .in particular , we have that where is independent of . as a direct consequence of, we have that for every .thus , from , and we deduce then , by comparison in the equation , we find from - we infer the existence of and with , such that , for a non - relabeled subsequence , we have in particular , the strong convergence of proves that is compact in . in order to prove the continuity of just assume that converges to some in .then , on account of the above bounds and of the uniqueness for problem , we have that converges to .we can thus conclude that has a fixed point which is a local weak solution .we know that satisfies the energy identity on some maximal interval .observe now that ( cf . and ) furthermore , since ( h2 ) implies , thanks to remark [ h4 ] we get on the other hand , we deduce from that then , integrating with respect to time from and and by exploiting , , and , we obtain gives , on account of ( i8 ) , that . 
in particular , note that .consider two solutions , and to with initial data and , respectively .set , , , and observe that for any and almost everywhere in , with initial condition .taking , equation yields therefore we have let us take .this gives in order to estimate the reaction term we observe that so that furthermore , there holds and , owing to ( i8 ) , we have assumption ( i8 ) also entails , .thus we obtain collecting - and - , we get besides we have that we add now to and we find so that gronwall s lemma yields .[ gacho ] let us take for the sake of simplicity and suppose that hypotheses ( h0)-(h2 ) and ( h4 ) , ( h6)-(h7 ) hold .furthermore , replace ( h3 ) and ( h5 ) by , respectively ( h9 ) : : there exist , and such that ( h10 ) : : .then , on account of theorem [ buona posizione ] and proposition [ stability ] , for any such that , there exists a unique global weak solution . as a consequence we can define a semigroup on a suitable phase space . more precisely , we define and we equip it with the distance then , for any , we set being the unique global solution to .we will show that the dynamical system is dissipative , that is , it has a bounded absorbing set .then , following the strategy outlined in , we prove that the same system has the ( connected ) global attractor .[ s absorbing ] let ( h0)-(h2 ) , ( h4 ) , ( h6)-(h7 ) , ( h9)-(h10 ) hold. then has a bounded absorbing set .we adapt ( * ? ? ?* proof of corollary 2 ) . by exploiting , , and in the energy equality( with ) , we obtain observe that on the other hand , taking in , we get therefore , recalling remark [ growth ] , we deduce that for every bounded set of there exists and a positive constant such that then , on account of remark [ perturbation ] , using - we get where for all ] we have ^ 2&\leq 2t^2\|u\|^2 + 2\bigl| { \int_\omega}f(t \psi)-{\int_\omega}f(0)\bigr|\\&\leq 2c + 2t^2 \|\psi\|^2 + 2{\int_\omega}f(t \psi ) + 2\bigl| { \int_\omega}f(0)\bigr|\end{aligned}\ ] ] and , thanks to remark [ perturbation ] , we obtain ^ 2 \leq 2c+2 t \bigl ( t + \frac{a^*}{2 } \bigr ) \| \psi \|^2 + 2t\bigl| { \int_\omega}f(u)-{\int_\omega}f(0)\bigr| + 4\bigl| f(0 ) \bigr|.\ ] ] this implies that is connected .hence the global attractor is also connected ( see ( * ? ? ?* corollary 4.3 ) ) .[ gachbeg ] here we show that problem can also be viewed as a dissipative dynamical system which possesses a connected global attractor .let us assume that assumptions ( h0)-(h1 ) , ( i6)-(i7 ) and ( h10 ) .then take for the sake of simplicity and set ( i9 ) : : ( i10 ) : : such that .these further restrictions are due to the peculiar difficulty of this equation , that is , the uniform control of ( see for the local case ) .indeed , the time dependent average can not be easily controlled by as in the case of cho equation . clearly ( i9 ) entails ( h2)-(h4 ) and ( i8 ) .such assumptions ensures that for any , where there is a unique global weak solution owing to theorems [ buona posizione chb ] and [ buona posizione chb2 ] .thus we can define a semigroup by setting . here is equipped with the metric . as in the previous section, we will show that the dynamical system has a bounded absorbing set .then , the same argument used to prove theorem [ agcho ] will lead to the existence of the ( connected ) global attractor .however , a crucial preliminary step is the uniform ( dissipative ) bound of for all . 
in this sectionwe extend the approach devised in for the local chbeg equation to prove the following : [ media controllata a dovere ] let ( h0)-(h1 ) , ( i6)-(i7 ) , ( i9)-(i10 ) hold . if , then there exist and such that latexmath:[\[\label{chb tratt 15 } choosing in with we find ( cf . ) then , let us multiply by , integrate over and subtract it to .this gives on the other hand , since , we have besides , following , we get thus yields by arguing as in the proof of corollary [ buona posizione chb2 ] we find besides , we have let us now choose in and exploit - .we obtain observe now that therefore we infer and using the gronwall lemma , we deduce that let .then we can find such that for all such that there holds by integrating with respect to time between and , we get choose now in .this gives since , thanks to ( h2 ) and young s lemma , we have besides , there holds observing now that and recalling ( i10 ) , we obtain taking - into account , from we deduce then , by means of the uniform gronwall lemma , on account of and , we find furthermore , by integrating inequality with respect to time between and , we get and thanks to we are led to we are now ready to recover an estimate for .let us rewrite equation as so that and thus from we infer , we can use and to control the last term of the above inequality therefore , from and we deduce .thanks to proposition [ media controllata a dovere ] and arguing as in the proof of proposition [ s absorbing ] , we can prove the following [ s absorbing chb ] let ( h0)-(h1 ) , ( i6)-(i7 ) , ( i9)-(i10 ) hold . then has a bounded absorbing set . finally ,adapting the proof of theorem [ agcho ] , we get let the assumptions of proposition [ s absorbing chb ] hold. then has the ( connected ) global attractor .this work was the subject of the first author s master s thesis at the politecnico di milano .his work was partially supported by the engineering and physical sciences research council [ ep / l015811/1 ] .the second author is a member of the gruppo nazionale per lanalisi matematica , la probabilit e le loro applicazioni ( gnampa ) of the istituto nazionale di alta matematica ( indam ) .( mr3108851 ) [ 10.3934/dcdsb.2013.18.2211 ] a. c. aristotelous , o. karakashian and s. m. wise , _ a mixed discontinuous galerkin , convex splitting scheme for a modified cahn - hilliard equation and an efficient nonlinear multigrid solver _ , _ discrete contin .b _ , * 18 * ( 2013 ) , 22112238 .( mr1606601 ) [ 10.1007/s003329900037 ] j. m. ball , _ continuity properties and global attractors of generalized semiflows and the navier - stokes equation _ , _ j. nonlinear sci ._ , * 7 * ( 1997 ) , 475502 ( erratum , j. nonlinear sci . * 8 * ( 1998 ) , p233 ) .( mr3180634 ) [ 10.1002/mma.2832 ] s. bosia , m. grasselli and a. miranville , _ on the longtime behavior of a 2d hydrodynamic model for chemically reacting binary fluid mixtures _ , _ math .methods appl ._ , * 37 * ( 2014 ) , 726743 .( mr2496714 ) [ 10.1137/080728809 ] r. choksi , m. a. peletier and j. f. williams , _ on the phase diagram for microphase separation of diblock copolymers : an approach via a nonlocal cahn - hilliard functional _ , _ siam j. appl . math ._ , * 69 * ( 2009 ) , 17121738 .( mr2854591 ) [ 10.1137/100784497 ] r. choksi , m. maras and j. f. williams , _2d phase diagram for minimizers of a cahn - hilliard functional with long - range interactions _ , _ siam j. appl ._ , * 10 * ( 2011 ) , 13441362 .( mr3253242 ) [ 10.3934/dcdsb.2014.19.2013 ] l. cherfils , a. miranville and s. 
zelik , _ on a generalized cahn - hilliard equation with biological applications _ , _ discrete contin .b _ , * 19 * ( 2014 ) , 20132026 .( mr2834896 ) [ 10.1016/j.jmaa.2011.08.008 ] p. colli , s. frigeri and m. grasselli , _ global existence of weak solutions to a nonlocal cahn - hilliard - navier - stokes system _ , _ j. math ._ , * 386 * ( 2012 ) , 428444 .( mr3000606 ) [ 10.1007/s10884 - 012 - 9272 - 3 ] s. frigeri and m. grasselli , _ global and trajectory attractors for a nonlocal cahn - hilliard - navier - stokes system _ , _ j. dynam .differential equations _ , * 24 * ( 2012 ) , 827856 .[ 10.1103/physrevlett.76.1094 ] g. giacomin and j. l. lebowitz , _ exact macroscopic description of phase segregation in model alloys with long range interactions _ , _ phys ._ , * 76 * ( 1996 ) , 10941097 .( mr1453735 ) [ 10.1007/bf02181479 ] g. giacomin and j. l. lebowitz , _ phase segregation dynamics in particle systems with long range interactions . i. macroscopic limits _ , _ j. stat ._ , * 87 * ( 1997 ) , 3761 . ( mr1638739 ) [ 10.1137/s0036139996313046 ] g. giacomin and j. l. lebowitz , _ phase segregation dynamics in particle systems with long range interactions .ii . interface motion _ , _ siam j. appl_ , * 58 * ( 1998 ) , 17071729 . [ 10.1002/mats.200300021 ] y. huo , h. zhang and y. yang , _ effects of reversible chemical reaction on morphology and domain growth of phase separating binary mixtures with viscosity difference _ ,_ macromol .theory simul ._ , * 13 * ( 2004 ) , 280289 .( mr2784354 ) [ 10.1016/j.jmaa.2011.02.003 ] s .- o . londen and h. petzeltov , _ regularity and separation from potential barriers for a non - local phase - field system _ , _ j. math ._ , * 379 * ( 2011 ) , 724735 .( mr2508165 ) [ 10.1016/s1874 - 5717(08)00003 - 0 ] a. miranville and s. zelik , _attractors for dissipative partial differential equations in bounded and unbounded domains _ , _ handbook of differential equations : evolutionary equations _ , * iv * ( 2008 ) , 103200 .( mr0976973 ) [ 10.1080/03605308908820597 ] b. nicolaenko , b. scheurer and r. temam , _ some global dynamical properties of a class of pattern formation equations _ , _ comm .partial differential equations _ , * 14 * ( 1989 ) , 245297 .
|
we introduce and analyze the nonlocal variants of two cahn - hilliard type equations with reaction terms . the first one is the so - called cahn - hilliard - oono equation which models , for instance , pattern formation in diblock - copolymers as well as in binary alloys with induced reaction and type - i superconductors . the second one is the cahn - hilliard type equation introduced by bertozzi et al . to describe image inpainting . here we take a free energy functional which accounts for nonlocal interactions . our choice is motivated by the work of giacomin and lebowitz who showed that the rigorous physical derivation of the cahn - hilliard equation leads one to consider nonlocal functionals . the equations also have a transport term with a given velocity field and are subject to a homogeneous neumann boundary condition for the chemical potential , i.e. , the first variation of the free energy functional . we first establish the well - posedness of the corresponding initial and boundary value problems in a weak setting . then we consider such problems as dynamical systems and we show that they have bounded absorbing sets and global attractors . francesco della porta , maurizio grasselli
|
in recent years pedestrian dynamics has gained more importance and a lot of attention due to continuously growing urban population and cities combined with an increase of mass events .this sets new challenges to architects , urban planners and organizers of mass events .one of the main goals is an effective use of the designed facility , for instance by minimizing jams thereby optimizing the traffic flow . in this context pedestrian simulationsare already used , e.g. for escape route design .however the approaches used in these simulations are only the first step to model the manifold influences on human beings during an evacuation process .major issues in this area include orientation and way finding : given a set of possible routes , which criteria influence pedestrians choice for a particular route ?this is essential for reproducing route choice in computer models and is difficult due to the many underlying subjective influences on this choice .the manner by which pedestrians choose their way has a direct influence not only on the overall evacuation time but also on the average time pedestrians spend in a jam . in this paperwe restrict ourselves to the case where pedestrians all have the same motivation , to leave the facility . the approaches of modelling pedestrians motion fall into two main groups : microscopic and macroscopic models .microscopic models are further categorized in spatially discrete ( e.g. cellular automata ) and spatially continuous models ( e.g. social force model , generalized centrifugal force model ) . for a detailed overviewwe refer to .ca models use floor fields ( static and/or dynamic ) to direct pedestrians to a destination point .route choice in continuous models can be achieved by means of a network which consists of a set of destination points .this type of way finding is known as graph - based routing .the destination points can be pre - determined ( exits for instance ) or adjustable ( crossings , turning point at the end of a corridor for instance ) .the minimal network is usually a a visibility graph ( see for more details ) which ensures that any location on the facility is within the visibility range of at least one node .the initial network can be refined by adding more adjustable points converging to ca .this graph can be extended to a navigation graph by adding more adjustable points .the generation of such graph is a complex process , some efficient methods are presented in .once built the shortest path is usually determined using well established algorithms such as dijkstra or floyd warshall .such networks are widely spread in motion planning by robots as well .the intrinsic behaviour of humans in the case of an evacuation is generally to follow the seemingly ( self estimated ) quickest path .this is indeed a subjective notion as it depends on some prerequisites , e.g. whether or not the pedestrian is familiar with the facility .the modelling of the quickest path is achieved by systematically avoiding congestions . in cathis is achieved by means of dynamic floor fields where pedestrians moving increase the probability of using that path thereby making it more attractive for other pedestrians .this implicitly leads to congestion avoidance .the density in front of the moving pedestrians within their sight range is also considered as well as the payload at exits ; other approaches include navigation fields .continuous models usually optimize the travel time in the constructed network . 
in , pedestrians minimize their travel time by solving the hamilton - jacobi - bellman equation yielding to the optimal pedestrians path at each time step .a combination of a graph - based routing with ca is presented in where the fastest path for pedestrians is computed using a heuristic algorithm . in this contributionan event driven way finding in a graph - based structure is introduced .the approach is based on an observation principle , pedestrians observe their environment and take their final decision based on the obtained data . with this observationthe quickest path is achieved .the modelled strategies are the local shortest path ( lsp ) , the global shortest path ( gsp ) , the local shortest combined with the quickest path ( lsq ) and finally the global shortest combined with the quickest path ( gsq ) .an important point less discussed is given by the criticality of an evacuation process , first and foremost the meaning of criticality for an evacuation process .evaluation criteria like the building itself , the population size and the initial distribution of the evacuees , the evacuation time are discussed in .more individual criteria like the individual travel time and waiting time are investigated in .the most used criteria are the overall evacuation time and a visualisation of the evacuation process at specific times . in this paperwe elaborate other criteria to address the criticality of an evacuation simulation .the analysed criteria are the individual time spent in jam , the jam size evolution over time and the total jam size defined as the area under the jam size evolution .we give a special credit to the time pedestrians spend in jam as well as the jam size .this is of particular importance . in the case of the hermes project , there are congestions that vary ( in place and size ) depending on the type of people attending the events .we try to reproduce this observed phenomenon for the forecast of the evacuation dynamic . in order to achieve this, route choice has to be individually modelled .those events involve pedestrians familiar and unfamiliar with the facility .the four previously mentioned strategies are proposed to reproduce their route choice .their influences on the previous mentioned criteria are investigated .the modelling approaches are presented in the second section .the third section describes the evacuation assessment criteria .the analysis of the results including simulation , distribution of the evacuation time and distribution of the individual times spent in jam using different initial conditions are discussed in the fourth section .the framework used for describing pedestrian traffic can be divided in a three - tiers structure .one distinguishes between the strategic , the tactical and the operational level . in our model the operational level of the pedestrian walking is described by the generalized centrifugal force model which operates in continuous space . in this model pedestrians are described by ellipses .the semi - axes of the ellipses are velocity dependent , faster ellipses need more space in the moving direction .the fundamental diagram is reproduced by the model at corridors making it adequate for the analysis presented here . in this paperwe focus on the strategic level only , i.e. 
the pedestrians are solely given the next destination point , which is the next intermediate destination for the self estimated optimal route .this is also the direction of the driven force .we simulate pedestrians that are familiar with the facility and pedestrians unfamiliar with the facility .for those two groups we identify criteria to reproduce their route choice .those criteria are the local / global shortest path and the quickest path .the dynamic change of the strategies is modelled .this emulates the internal state of the pedestrians and the strategies are subject to change during the evacuation process .in addition one of the challenges consists of finding a good balance between the number of parameters and the numbers of criteria .the model should be as simple as possible with the parameter space kept as small as possible while considering as many criteria as required .this is important for model understanding and stability . in the frameworkused here , pedestrians move from one decision area to the next one .the decision areas are connected with nodes , which will be interchangeable with destination points .a decision area is a place where the pedestrian decides which way to go or change their current destination . in this worka decision area is an abstraction for rooms and corridors , in addition we restrict ourselves to the case where the destination points are exits and corridors end .[ fig : decision_area ] illustrates this principle and is a mapping of the facility presented in fig .[ fig : reference_selection ] .the pedestrians will be moving from the decision area 1 to the decision area 2 .the two areas are connected with one node .the network is automatically generated from the facility based on the inter - visibility of the exits .the euclidean distance between the nodes is used as weights .the most straight - forward routing approach is the local shortest path .once a node in the network is reached , the local nearest node is chosen as next destination .the global shortest path is determined by running the floyd warshall algorithm with path reconstruction on the built graph network .the runtime of is not an issue since we only have a small amount of nodes . from every node on the graph, the global shortest path to reach the outside can be determined .pedestrians familiar with the facility have the global map i.e. 
the constructed network and may approximate the global shortest path to their final destination independently of their current location .they have a better analysis possibility of the current situation .other pedestrians without any global information choose the local shortest path .in contrast to the shortest path , the quickest path is dynamic and changes with time throughout the simulation .the business logic of the routing algorithm is shown in fig .[ fig : quickest_path ] .the main events used in this routing algorithm to redirect pedestrians are the entering a new room and the identification of a jam situation .the pedestrians are first routed using the shortest path , global or local depending on theirs affiliations .the key elements of a quickest path routing approach are the estimation of the travel time , the estimation of the gain and an assessment of this gain .three functions are developped in this paper to model those key elements .we first define four values which will be used throughout this section .re - routing time + the re - routing time for a pedestrian with position is the time where one of the following conditions holds : + \ : \ : \vee \ : \ :\| \vec{x}(t_r ) - \vec{n_i}\| \leq d_{min } \label{eq : events } $ ] where is the threshold jam velocity , the patience time and the minimal distance to the node to consider it as reached .[ def : event ] reference pedestrian + let and be two pedestrians with positions and . is a reference pedestrian to with respect to the node of the graph and is defined as : + , where is the jamming queue at the node .if more than one pedestrian satisfy the condition , one is randomly chosen .[ def : reference ] jamming queue + let be a node in the navigation graph , the jamming queue at the node is defined as + where is the threshold jam velocity .[ def : queue ] visibility range + represents the set of all nodes within the visibility range of the pedestrian considering the actual location in the facility and other pedestrians .it is determined using the algorithm [ alg : visibility ] .[ def : visibility ] with 2 decisions areas and 4 nodes.,width=302 ] will select , and . has no clearance of the current situation and will not select any . selects and . will only select ., width=302 ] at any time ( see definition in [ def : event ] ) during the simulation , a new orientation process is started for the pedestrian .a reference pedestrian ( see definition [ def : reference ] ) is selected from the queue ( see definition [ def : queue ] ) and observed during an observation time . at the end of the observation , the expected travel time via all nodes in the visibility range ( see definition [ def : visibility ] ) is approximated using eq .[ eq : time ] . is the average velocity of the reference pedestrian over the observation time defined by eq .[ eq : observation ] . is randomly chosen between 1 and 3 seconds , the minimal distance is set to 0.20 and the minimal jam velocity is 0.2 .the estimated travel time is converted to a gain using eq .[ eq : gain ] . the cost benefit analysis ( cba ) function defined in eq .[ eq : cba ] determines whether it is worth changing the current destination . and are the gains calculated with eq .[ eq : gain ] . is always the current destination and the other evaluated alternatives .the benefit returned should be greater than a threshold ( ) in order for the pedestrian to consider the change .the thresholds taken here are 0.20 for familiar and 0.15 for unfamiliar pedestrians . 
when a pedestrian is caught in a jam for a period ( patience time ) which varies depending on the pedestrian , he / she looks for alternatives in the decision area and in the sight range as described in fig .[ fig : reference_selection ] .the initial value for is 10 seconds .it is increased by 1 second with any unsuccessful attempt to escape the jam .the value is kept until the room is changed .this amortizes the number of routes changes in the same room .the process of selecting a reference pedestrian is explained in fig .[ fig : reference_selection ] .the pedestrian has entered a new room .the pedestrians , and have identified a jam situation .pedestrian selects the reference pedestrians , and for the exits , and respectively .the pedestrian has a restricted visibility and will have only as reference .there is no possibility for to change .the references selection is based on the euclidean distance and visibility range . the visibility is implemented by drawing a line from the concerned pedestrian to the pedestrians in the queue , there should not be any intersections with other pedestrians or walls in the room .it is important to mention here , that the queue size does not play a major role , more important is the processing speed of the queue . *input : * pedestrian + * output : * [ alg : visibility ] unlike other algorithms , the approach presented here is not specialized to a particular case ( asymmetric exits choice for instance ) , i.e. not bounded to the geometry .it is also not dependent on the initial distribution of the pedestrians .usually evacuation processes are assessed with a visual proof and evacuation time within a feasible range .one less discussed question is the criticality of an evacuation process .the state of an evacuation can be critical for a certain group of the population , aged persons for example , but rather harmless for a different group .also the same results can be interpreted differently depending on the surrounding conditions . in this sectionwe address three criteria to assess an evacuation scenario .+ the evacuation time is given by the last person leaving the facility .another definition can be a clearance of the building up to 95% of the occupants .we consider the former definition here . a visual assessment of the evacuation dynamics for a simple scenario after 60 seconds is given by fig .[ fig : evacuation_time_demo ] .the colour of the pedestrians is correlated to their current velocity .red means slow and represents congestions areas .green corresponds to the maximum desired velocity of the pedestrians .the desired velocities are gaussian distributed with mean and standard deviation .the evacuation times distribution is presented in fig .[ fig : evacuation_time_200_peds ] .the smallest variance is achieved by the gsp .this is quite normal as the gsp remains the same for all pedestrians independent of their initial positions .the largest variance is given by the lsp .the dynamics brought by the quickest path leads not only to a reduction of the overall evacuation time and a reduction of the width but also to realistic shapes in the simulation . up to nowlittle importance has been brought to jam analysis itself i.e. 
how long pedestrian stay in jam .this is strongly coupled with the implemented routing strategy .the keyword jam is unfortunately not well defined in pedestrian dynamics .a rather crude definition would be to have an absolute zero velocity over a minimal time interval .the minimal time interval is needed to avoid very short velocity reduction at sharp turn for instance .we consider pedestrians moving at a speed lower than for a period of at least 10 seconds as being in a jam situation .the total time in jam is recorded for each pedestrian and a distribution of the recorded times is calculated .the jamming time distribution for the simulation scenario given by fig .[ fig : evacuation_time_demo ] is presented in fig .[ fig : jamming_time_distribution_200_peds ] .as expected the width of the distribution is smaller for the quickest path .the jam size and its evolution can not directly be derived from the evacuation time and/or the individual times spent in jam .it has to be analysed separately and for each exit individually .short - lived jams are rather uncritical , the same holds for short - sized jams .we are particularly interested in big jams with a long lifetime. they can be dangerous and should be checked against the characteristics of the evacuees for instance their ages , but also against the environmental conditions , the temperature for instance .the jam size is calculated by summing up the effective areas occupied by the pedestrians in jam at each time step .this is easily done by summing up the ellipses areas representing the pedestrians in the gcfm .another method could be to build the envelope of the pedestrians and calculate the area of the resulting polygon .[ fig : jam_evolution_initial_distribution ] shows a straight forward example of a jam size analysis scenario after the first time step ( 0.01 second ) .the colour corresponds to the states .red pedestrians are waiting for a possibility to move .other pedestrians are already moving .the initial density is on average 1 .the corresponding average jam areas are presented in table [ tab : areas_demo ] .the values for the global and local shortest path are the same .there as some difference in their corresponding combination with the quickest path . at a first look the values for the different routing strategies do not differ much from each other , but one has to consider the difference in the evacuation time , which is more than one minute in this case . there is a distribution of the overall load from the congested exits to the non congested ones .+ the jam size evolution at the different exits using the quickest path is shown in fig .[ fig : jam_size_evolution_initial_distribution ] .there is a sharp increase at the beginning of the simulation .peaks of approximately 150 are reached at all exits. the values then decrease with a slope correlated with the routing strategy used .the slope is constant with the static strategies .the quickest path shows a less constant slope due to the fact that some pedestrians are changing their destinations ..average jam size at different exits of fig .[ fig : jam_size_initial_distribution ] for the four strategies . 
[ cols="^,^,^,^,^ " , ] [ tab : areas_demo ]the strategies discussed in the previous section are tested on the structure described in fig .[ fig : initial_distribution ] with different initial distributions .the simulation area is a simplified model of a section of the esprit arena in dsseldorf , germany which holds up to 50.000 spectators .the room r1 to r4 are the grandstand with dimensions 10 x 20 .the tunnels are 2.40 wide and 5 long .the exits are 1.20 wide .the rooms r4 and r6 are 10 wide .the criteria used to evaluate the simulation results are presented in the previous section . in the first simulation case we are interested in the evacuation of a single block of the stadium .[ fig : arena_single ] shows the initial homogeneous distribution of this test with 250 pedestrians .the dynamic of the evacuation after 60 seconds for the four strategies is presented in fig .[ fig : dynamic_single ] . with the shortest path only two exitsare effectively used whereas the quickest path approach takes advantage of fours exits leading to a faster evacuation .the results of the evacuation time analysis are summarized in fig .[ fig : evac_time_single ] .1000 runs are performed for the evaluation .the dynamic brought by the quickest path leads to a faster evacuation , on average 1 minute faster . as depicted in fig .[ fig : dynamic_single ] other ( less congested or free ) exits are used , as will be expected in a real evacuation scenario .there is not much difference between the results given by the global and the local shortest path , this is due to the shape of the investigated facility .the second value analysed is the individual distributions of time spent in jam .the results are presented in fig .[ fig : jam_time_single ] . as expected the results are correlated to the evacuation time . with the quickest path, pedestrians spend less time in jam , 30 seconds on average . this time is almost tripled using the static strategies .[ fig : jam_evolution_single ] shows the evolution of the jam size at different exits .exits without congestions have been left out .the static strategies shows five long lasting jams .the quickest path alternatives shows more jams but short lived .the later ones are less critical .the total jam size distribution is taken from fig .[ fig:250_single_jam_distribution ] .the average jam size for the static strategies is 11 and 9 for the quickest path .+ + the second test consists of the simulation of the evacuation of the complete facility with 1000 pedestrians .the initial distribution is shown in fig .[ fig : arena_complete ] .a snapshot of the simulation after 60 seconds is shown in fig .[ fig : dynamic_complete ] .the results of the jamming time and the evacuation time are summarized in fig .[ fig : evac_time_complete ] and fig .[ fig : jam_time_complete ] respectively .there is no much difference between the mean values of the distributions as in the previous case .this is due to the highly congested situation .there is no such quickest path in this case and most of the pedestrians just have to follow the flow .those results are confirmed by the time in jam distribution .the total jam size distribution is taken from fig .[ fig:1000_complete_jam_distribution ] .the mean jam sizes are 46 for the static and 40 for the dynamic strategies. 
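A small sketch of the bookkeeping behind two of the assessment criteria used above is given below: a pedestrian counts as jammed once its speed has stayed below a threshold for at least 10 seconds, the individual time in jam is accumulated per pedestrian, and the instantaneous jam size is the summed area of the jammed pedestrians. The threshold speed (its symbol is stripped in the text) and the use of circles of fixed radius instead of the velocity-dependent GCFM ellipses are simplifying assumptions for illustration.

```python
import math

V_JAM = 0.2      # threshold speed in m/s (placeholder; the exact value is not reproduced here)
T_MIN = 10.0     # minimal duration in s before a slow pedestrian counts as jammed

class Pedestrian:
    def __init__(self, radius=0.2):
        self.radius = radius     # fixed radius; the GCFM uses velocity-dependent ellipses
        self.slow_since = None   # time at which the current slow episode started
        self.time_in_jam = 0.0

def update_jam_stats(peds, speeds, t, dt):
    """Update the per-pedestrian time in jam and return the current jam size (area in m^2)."""
    jam_area = 0.0
    for ped, v in zip(peds, speeds):
        if v < V_JAM:
            if ped.slow_since is None:
                ped.slow_since = t
            if t - ped.slow_since >= T_MIN:          # slow for long enough -> in jam
                ped.time_in_jam += dt
                jam_area += math.pi * ped.radius ** 2
        else:
            ped.slow_since = None
    return jam_area

# toy usage: two pedestrians, one stuck and one moving freely, over 30 s with dt = 0.01 s
peds = [Pedestrian(), Pedestrian()]
t, dt = 0.0, 0.01
for _ in range(3000):
    area = update_jam_stats(peds, speeds=[0.05, 1.3], t=t, dt=dt)
    t += dt
print(peds[0].time_in_jam, peds[1].time_in_jam, area)
```

Summing the recorded areas over a simulation run and averaging over time gives the total (average) jam size reported in the tables, while the per-pedestrian times feed the jamming-time distributions.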
+ in the third simulation scenario , the response of the system to a disturbance is simulated . the exits e2 and e8 ( see fig . [ fig : arena_single ] ) are broken and can no longer be used ( see fig . [ fig : arena_complete ] ) . the results of the evacuation time and the time in jam are presented in fig . [ fig : evac_time_disturbance ] and fig . [ fig : jam_time_disturbance ] respectively . there is now a certain asymmetry in the escape route scheme and this is reproduced by the modelling approach . the quickest path leads to a faster evacuation and to less time in jam . this behaviour is also the expected one . the total jam size distribution is shown in fig . [ fig:1000_complete_jam_distribution_dist ] . all mean values are lower than in the previous case without disturbance . the approach presented in this paper offers the possibility to assess a given building structure considering the manifold of possible route choices . four strategies have been presented as combinations of quickest and shortest path . the quickest path approach , which is based on an observation principle , is not sensitive to the initial distribution of pedestrians or to special topologies like symmetric exits , making it quite general . furthermore it leads to more realistic dynamics in the evacuation simulation . in addition , criteria to assess the criticality of an evacuation simulation have been elaborated . we investigated the evacuation time distribution , the time in jam distribution and the average jam size of pedestrians using quickest and shortest path routing approaches in a graph-based way finding algorithm . the approaches have been tested on different scenarios with different complexities involving symmetric , asymmetric or even broken escape routes . similarities between the distributions , their meanings and their impacts on different evacuation scenarios have been quantitatively analysed . this work has been performed within the program `` research for civil security '' in the field `` protecting and saving human life '' funded by the german government , federal ministry of education and research ( bmbf ) . the project is funded under grant no . 13n9952 . a. kirchner , h. klüpfel , k. nishinari , a. schadschneider , m. schreckenberg , simulation of competitive egress behavior : comparison with aircraft evacuation data , physica a 324 ( 2003 ) 689 - 697 . http://dx.doi.org/10.1016/s0378-4371(03)00076-1 . v. j. blue , j. l. adler , cellular automata microsimulation for modeling bidirectional pedestrian walkways , transportation research part b 35 ( 2001 ) 293 - 312 . http://dx.doi.org/10.1016/s0191-2615(99)00052-1 . d. helbing , p. molnár , social force model for pedestrian dynamics , phys . rev . e 51 ( 1995 ) 4282 - 4286 . http://dx.doi.org/10.1103/physreve.51.4282 . m. chraibi , a. seyfried , a. schadschneider , generalized centrifugal force model for pedestrian dynamics , physical review e 82 ( 2010 ) 046111 . http://dx.doi.org/10.1103/physreve.82.046111 . a. schadschneider , w. klingsch , h. klüpfel , t. kretz , c. rogsch , a. seyfried , encyclopedia of complexity and system science , vol . 5 , springer , berlin , heidelberg , 2009 , ch . evacuation dynamics : empirical results , modeling and applications , pp . 3142 - 3176 . j. dijkstra , h.
timmermans , towards a multi - agent model for visualizing simulated user behavior to support the assessment of design performance , automation in construction 11 ( 2 ) ( 2002 ) 135145 .http://dx.doi.org/doi:10.1016/s0926-5805(00)00093-5 [ ] .m. asano , t. iryo , m. kuwahara , microscopic pedestrian simulation model combined with a tactical model for route choice behaviour , transportation research part c : emerging technologiesarticle in press , corrected proof .m. hcker , v. berkhahn , a. kneidl , a. borrmann , w. klein , graph - based approaches for simulating pedestrian dynamics in building models , in : 8th european conference on product & process modelling ( ecppm ) , university college cork , cork , ireland , 2010 , http://zuse.ucc.ie / ecppm/. h. alt , e. welzl , http://dx.doi.org/10.1007/bf01928918[visibility graphs and obstacle - avoiding shortest paths ] , mathematical methods of operations research 32 ( 1988 ) 145164 , 10.1007/bf01928918 .http://dx.doi.org/10.1007/bf01928918 e. w. dijkstra , http://gdzdoc.sub.uni-goettingen.de/sub/digbib/loader?did=d196313[a note on two problems in connexion with graphs . ] , numerische mathematik 1 ( 1959 ) 269271 .http://gdzdoc.sub.uni - goettingen.de / sub / digbib / loader?d% id = d196313[http://gdzdoc.sub.uni - goettingen.de / sub / digbib / loader?d% id = d196313 ] a. kirchner , a. schadschneider , simulation of evacuation processes using a bionics - inspired cellular automaton model for pedestrian dynamics , physica a 312 ( 2002 ) 260276 . http://dx.doi.org/10.1016/s0378-4371(02)00857-9 [ ] .e. s. kirik , t. b. yurgelyan , d. v. krouglov , the shortest time and/or the shortest path strategies in a ca ff pedestrian dynamics model , journal of siberian federal university .mathematics & physics 2 ( 3 ) ( 2009 ) 271278 .z. fang , q. li , q. li , l. d. han , d. wang , http://www.sciencedirect.com / science / article / b6v23 - 526ms05 - 1/2/a4fcec1% 4fa891398b43d0f03c3a554e3[a proposed pedestrian waiting - time model for improving space - time use efficiency in stadium evacuation scenarios ] , building and environment in press , accepted manuscript ( 2011 ) . http://dx.doi.org/doi : 10.1016/j.buildenv.2011.02.005 [ ] .http://www.sciencedirect.com / science / article / b6v23 - 526m% s05 - 1/2/a4fcec14fa891398b43d0f03c3a554e3[http://www.sciencedirect.com / science / article / b6v23 - 526m% s05 - 1/2/a4fcec14fa891398b43d0f03c3a554e3 ] a. u. kemloh wagoum , a. seyfried , optimizing the evacuation time of pedestrians in a graph - based navigation , in : m. panda , u. charraraj ( eds . ) , developments in road transportation , macmillian publishers india ltd , 2010 , pp . 188196 .a. seyfried , a. portz , a. schadschneider , phase coexistence in congested states of pedestrian dynamics , in : s. bandini , s. manzoni , h. umeo , g. vizzari ( eds . ) , cellular automata , vol .6350 of lecture notes in computer science , 9th international conference on cellular automata for reseach and industry , acri 2010 ascoli piceno , italy , september 2010 , springer - verlag berlin heidelberg , 2010 .
|
this paper presents an event-driven way finding algorithm for pedestrians in an evacuation scenario , which operates on a graph-based structure . the motivation of each pedestrian is to leave the facility . the events used to redirect pedestrians include the identification of a jam situation and/or the identification of a better route than the current one . this study considers two types of pedestrians : those familiar and those unfamiliar with the facility . four strategies are modelled to cover these groups . the modelled strategies are the shortest path ( local and global ) ; they are combined with a quickest path approach , which is based on an observation principle . in the quickest path approach , pedestrians take their decisions based on the observed environment and are routed dynamically in the network using an appropriate cost-benefit analysis function . the dynamic modelling of route choice with different strategies and types of pedestrians considers the manifold of influences which appear in the real system and raises questions about the criticality of an evacuation process . to address this question , criteria are elaborated . the criteria we focus on in this contribution are the evacuation time , the individual times spent in jam , the jam size evolution and the overall jam size itself . the influences of the different strategies on these evaluation criteria are investigated . the sensitivity of the system to disturbances ( e.g. a broken escape route ) is also analysed . pedestrian dynamics , routing , quickest path , evacuation , jam , critical states .
|
dynamics of our world is governed and described by differential equations .realization of this startling fact was evaluated by newton as the most important discovery of his life . however , explicit analytical solutions of differential equations are the exception rather than the rule .this makes scientists develop special and approximate methods for the analysis of differential equations because every new step in understanding the properties of their solutions gives a further insight into a physical theory described by corresponding equations .thus , for example , the discovery of adiabatic invariants of the second order differential equation with slowly varying parameters was an important step in the development of quantum theory .the existence of one more remarkable property of this equation , the so - called geometric phase , was noticed only 80 years later .historical aspects of the development of ideas related to the understanding of the properties of solutions of differential equations with slowly varying parameters as well as their theoretical , experimental and applied aspects one can find in many reviews and books ( see , for example , ) . the quantity considered in the present paper , the geometric phase , is also known as the topological or nonholonomic phase and often associated with the names of its pioneers : rytov , vladimirskii , pancharatnam , berry , hannay , less frequently with ishlinskii ( who gave the explanation of systematic gyroscope bias error after a long voyage ) , and others . in our workwe consider this concept at the classical , non - quantum level and in what follows call it the geometric or hannay phase. the geometric phase can occur both in quantum and in classical systems .this is not astonishing in view of the actually identical second order differential equations which are the time - independent schr equation and the newton ( or hamilton ) equation for the harmonic oscillator with a linear restoring force .the analogy between quantum and classical phenomena is clearly seen , for example , when one compares the classical phenomenon of parametric resonance and the band character of the spectrum of quantum particle in a stationary periodic field : both of the phenomena are described by the hill equation .this analogy was also repeatedly used in the study and comparison of the adiabatic dynamics of classical systems and the wkb approximation of quantum mechanics . in mathematical terms , the geometric phase is a correction to the dynamical phase for the harmonic solution of a linear differential equation with the broken time - reversal invariance or , in other words , for the solution which describes the vibrational mode of motion of dynamical systems with slowly varying parameters . in the present work we give an elementary example of mechanical system illustrating the physical meaning of hamiltonian ( [ hamiltonian1 ] ) and , in this way , the possible range of applicability of hannay s results .this mechanical system is a plane mathematical pendulum with the slowly varying mass and string length , and with the suspension point moving at a slowly varying speed .the fact of canonical equivalence between the considered pendulum and a damped harmonic oscillator is surprising from the physical point of view and trivial , at the same time , from mathematical point .we discuss this duality at section [ discuss ] . 
a complex form of gho hamilton function is presented in appendix .the simplest second order equation which can be a demonstrative example of the existence of geometric phase in classical adiabatic dynamics is the equation of motion of the generalized harmonic oscillator ( gho ) .the importance of this example is confirmed by the fact that scientists after hannay often returned to this equation using different methods for its analysis .the hamiltonian of the gho is given by where and are the canonically conjugate coordinate and momentum ; , , and are the parameters of the generalized oscillator . when the parameters , and are constant , the energy of the system is a constant of motion . for the values of , and satisfying the inequality , solutions of the hamilton equations take the form : if the parameters change slowly , see eq .( [ epsilon ] ) , the motion of the oscillator can be approximately regarded as the periodic one of the form ( [ solutions1 ] ) with the slowly varying amplitude and phase both of which should be determined . in this case , the energy of the system is not conserved , but there is a new approximate conserved quantity , the adiabatic invariant , which remains constant with ( non - analytic ) exponential accuracy , where is some constant determined by analytical properties of varying parameters .the phase of the oscillator is an ` almost linear ' function of time .it was shown in the works by hannay and berry that the phase of the oscillator can be represented as the sum of two quantities , , where the dynamic and the geometric phases are : the independence of the geometric phase of time follows from the last equation of ( [ d&gphases ] ) , provided the adiabatic condition holds , and explains the name of this phase .the dependence of the phase on the path of integration is associated with the concept of anholonomy . reversing the direction of integration alongthe contour changes the sign of the geometric phase , and when the parameter , which violates the time - reversal invariance of the hamiltonian ( [ hamiltonian1 ] ) , becomes zero , the geometric phase vanishes as well .if the line of variation of the parameters , and is the closed curve , then the integral corresponding to the geometric phase can be converted by stokes theorem to the surface integral which is also independent of the time of parameter change : where and similar expressions are projections of oriented surface elements on relevant directions .this is the essence of berry and hannay s results in the application to the classical mechanics .the result ( [ d&gphases ] ) can be obtained by averaging over the ` fast ' variable without appealing to the action - angle variables , as it was originally made in hannay s work . in more detailsthis procedure is as follows : it is necessary to substitute the expressions ( [ solutions1 ] ) into the hamiltonian equations of motion , bearing in mind that the parameters , and are functions of time . solving the obtained equations with respect to and and averaging them over the period of motion ,one arrives at simple differential equations for , and that have the solutions given by the expressions ( [ adinvar ] ) and ( [ d&gphases ] ). another alternative method of obtaining the results ( [ adinvar ] ) and ( [ d&gphases ] ) one can find in appendix b. note , that the geometric phase stems from the non - potential ( vortex ) nature of the differential form , since in general case ; in the theory of differential forms such forms are called inexact . 
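the size of the geometric contribution can be checked numerically without the closed-form expressions . the sketch below is an illustration , not taken from the paper : it assumes the standard gho hamiltonian h = ( x q^2 + 2 y q p + z p^2 ) / 2 with frequency omega = sqrt ( x z - y^2 ) , drives the parameters slowly around an arbitrarily chosen closed loop with y varying ( so that time-reversal invariance is broken ) , and estimates the geometric phase as the total accumulated angle minus the dynamical phase .

```python
# Numerical sketch of the splitting theta = theta_dynamic + theta_geometric for a
# generalized harmonic oscillator.  Assumed Hamiltonian: H = (X q^2 + 2 Y q p + Z p^2)/2,
# omega = sqrt(X Z - Y^2).  The parameter loop below is an arbitrary illustrative choice.
import numpy as np
from scipy.integrate import solve_ivp

eps = 2e-3                        # slowness parameter
T = 2 * np.pi / eps               # one closed loop in parameter space

def XYZ(t):
    return 1.0 + 0.5 * np.cos(eps * t), 0.3 * np.sin(eps * t), 1.0

def rhs(t, y):
    q, p = y
    X, Y, Z = XYZ(t)
    return [Y * q + Z * p, -X * q - Y * p]     # Hamilton's equations for the assumed H

sol = solve_ivp(rhs, (0.0, T), [1.0, 0.0], rtol=1e-9, atol=1e-12, dense_output=True)

t = np.arange(0.0, T, 0.05)                    # fine grid so the fast angle unwraps safely
q, p = sol.sol(t)
X, Y, Z = XYZ(t)
omega = np.sqrt(X * Z - Y**2)

# instantaneous angle variable: with v = p + (Y/Z) q the (frozen-parameter) motion is
# q = A cos(phi), v = -(A omega / Z) sin(phi), phi increasing at rate omega
v = p + (Y / Z) * q
phi = np.unwrap(np.arctan2(-Z * v / omega, q))

theta_total = phi[-1] - phi[0]
theta_dyn = np.sum(0.5 * (omega[1:] + omega[:-1]) * np.diff(t))   # integral of omega dt
print("geometric phase estimate:", theta_total - theta_dyn)
# shrinking eps further should make this residual settle near an eps-independent value,
# the geometric (hannay) phase of the chosen loop
```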
the hannay phase can not be calculated only on the basis of initial and final states of the oscillator and depends on the path connecting the start and end points - states of the system in the parameter space .for the existence of the geometric phase ( see eq .( [ d&gphases ] ) ) , the most significant factor is the lack of -invariance of hamiltonian ( [ hamiltonian1 ] ) . in spite of the simplicity ,this result had a great influence on the subsequent development of the theory of dynamical systems and found numerous applications . however , until now the question , which systems can be described by the hamiltonian of the gho is still open . in the work it was shown that the hamiltonian ( [ hamiltonian1 ] ) is canonically equivalent to the hamiltonian of the equation of damped harmonic oscillator .the result , on the one hand , is a bit surprising but , on the other hand , leaves a feeling of dissatisfaction .in particular , the existence of other simple counterparts of the gho among well - known systems of mechanical or other origin seems to be natural .let us consider the motion of simple plane linearized mathematical pendulum with the suspension point moving with a small acceleration along the vertical axis of the oscillation plane , see fig.[fig2 ] .the speed of the suspension point as well as two other pendulum parameters the mass and the length of the string are supposed to be slowly changing functions of time with the characteristic scale much greater than the period of harmonic oscillations of the pendulum : plane with the suspension point moving along the axis .,height=226 ] for small oscillations , the coordinates of the point are : here is the coordinate of the suspension point of the pendulum and is the deflection angle of the pendulum .the lagrangian of the pendulum up to the terms of the first order in can be written as where . using the standard legendre transformation and introducing the generalized momentum instead of generalized velocity , we find the hamilton function , thus , we directly arrive at the hamilton ( [ hamiltonian1 ] ) of the gho , where the parameters of the hamiltonian are given by : the second term in ( [ hamiltonian2 ] ) arises due to the transition to the moving frame of reference and the lack of invariance under the spatial transformation .( such term does not appear when the suspension point of the pendulum moves only along the horizontal axis . )the renormalization of gravity in the third term of ( [ hamiltonian2 ] ) is related to the action of forces emerging in the non - inertial frame of reference . substituting ( [ param1 ] ) into ( [ adinvar ] ) and ( [ gphase ] ), we immediately obtain the adiabatic invariant , and both ( the dynamical and geometric ) phases of the pendulum : we would like to note here that the parameters and of the gho play different role in determining the geometric phase . indeed , the expression for the geometric phase in ( [ d&gphases ] ) does not contain the derivative of with respect to . by this reason we should take into account the small parameter in and in eqs .( [ lagrangian ] ) and ( [ hamiltonian2 ] ) , and neglect terms with such derivatives in and , if there are any . in view of the equality , the expression for the dynamical phase also contains the parameter and , in this way , the term with , which should obviously be included into the geometric phase . as can be seen from eqs .( [ hamiltonian2 ] ) and ( [ d&gphasespend ] ) , the variation of parameter is crucial for the occurrence of the geometric phase . 
if the suspension point of pendulum moves with a constant speed , the geometric phase .the second term in ( [ hamiltonian2 ] ) is responsible for the absence of time reversal symmetry of the system .variations of only and can not give rise to the geometric phase .the variation of in the second term of eq .( [ hamiltonian2 ] ) is not enough for engendering the geometric phase because the terms proportional to the time derivative of in the second and third terms of ( [ hamiltonian2 ] ) cancel each other .thus , we have shown the canonical equivalence of the simple linearized mathematical pendulum and the gho with adiabatically varying parameters .in the work it was shown that the hamiltonian ( [ hamiltonian1 ] ) is canonically equivalent to the hamiltonian of the equation for the dissipative harmonic oscillator : indeed , it is easy to check that ( [ eqdampedho ] ) follows from the euler - lagrange equation along with the generalized caldirola - kanai lagrangian , } ( { \dot{q}}^2-{\omega}^2 q^2 ) .\label{caldirolakanailag}\ ] ] the lagrangian ( [ caldirolakanailag ] ) makes it possible to calculate the hamiltonian corresponding to eq .( [ eqdampedho ] ) : }+ \frac{m{\omega}^2 q^2}{2}\exp { \left[2\int^t \lambda(s)ds \right]}. \label{caldirolakanaiham}\ ] ] the equations known from the theory of canonical transformations , establish the canonical equivalence of two different ways of describing the same hamiltonian system . the generating function } , \label{generatingf}\ ] ] and the expression ( [ canonictrans ] ) , after the identification of the parameters allow us to verify the equivalence of ( [ hamiltonian1 ] ) and ( [ caldirolakanaiham ] ) by direct calculation .thus , the equivalence of the plane pendulum to the gho , on the one hand , and the gho to the damped harmonic oscillator ( dho ) , on the other hand , yield the equivalence of the plane pendulum to the dho .the last conclusion can be verified directly by the substitution };\,\ , m = ml^2,\ \lambda = v / l,\ { \omega}^2=(gl+v^2+v\dot{l})l^{-2 } , \label{correspondencedo&pendulum2}\ ] ] into the lagrangian ( [ caldirolakanailag ] ) . as a resultwe obtain expression ( [ lagrangian ] ) . to obtain caldirola - kanai lagrangian ( [ caldirolakanailag ] ) of the damped oscillator we have to put } \label{correspondencedo&pendulum1}\end{aligned}\ ] ] into ( [ lagrangian ] ) and find by eliminating the term with .this results in .thus , we have shown that the planar pendulum ( [ lagrangian ] ) is canonically equivalent to the dissipative dho ( [ eqdampedho ] ) , ( [ caldirolakanailag ] ) .the equivalence holds for the systems with slowly varying parameters ( not only for the constant ones ) .another complex form of the gho hamiltonian , which is also canonically equivalent to the original gho hamiltonian ( [ hamiltonian1 ] ) is presented in appendix a.let us expose two different points of view on the subject under discussion . herewe call them `` physical '' and `` mathematical '' .the physicists believe that dissipative forces are outside of the scope of applicability of variational principles of analytical mechanics .the general point of view on applicability of this principle may be represented as follows ( see , e.g. 
, ) .newtonian vector mechanics describes the motion of mechanical systems under the action of forces applied to them .newton s approach does not limit the nature of the forces , which are usually divided into potential and dissipative .the lagrange - hamilton variational mechanics describes the motion of mechanical systems under the action of only potential forces . from this pointthe existence of lagrangian for dissipative system and its equivalence to non - dissipative one is something extraordinary .nevertheless the caldirola - kanai lagrangian describing the dissipative oscillator is not an exceptional example of ` exotic ' systems ( with the equation of motion containing the time derivatives of generalized coordinates ) .the caldirola - kanai lagrangian also describes systems with an arbitrary potential instead of in ( [ caldirolakanailag ] ) .other example of the equation of motion and the lagrange function for the oscillator with quadratic dependence of ` friction ' on velocity reads : .\label{fipoitsquar}\ ] ] one more example of the systems with ` friction ' is the nonlinear hirota oscillator playing an extremely important role in the study of nonlinear evolution equations and dynamical systems .its equation of motion and the lagrange function are so , we have too many examples of equations containing the time derivatives of generalized coordinates and we should look for some explanation for this phenomenon . as a matter of fact this `` strange '' situation have been explained more than a century ago . the mathematical approach formulates the inverse problem of the calculus of variations as the problem of finding conditions , ensuring that a given system of differential equations of motion coincides with the system of euler - lagrange equations of an integral variational functional .this problem ( in recent years also known as the sonin - douglas problem ) was first considered by sonin for one second order ordinary differential equation in 1886 ( the almost forgotten paper ) .he proved that every second order equation has a lagrangian .then the same idea and approach appeared in 1894 .later , it was shown that for one - dimensional systems in context of the inverse problem of lagrangian dynamics and non - uniqueness of lagrangian there are infinitely many lagrangians which result in the same trajectory in the configuration space for any second - order differential equation .they are not , of course , canonically equivalent in the usual sense , since they may very well give different second order equations for and thus different orbits in the phase space .different lagrangian descriptions of the same system engender different ` energies ' , .it turns out that , depending on a particular choice of coordinates for its lagrangian description , a given dynamical system may be regarded either as dissipative or not . 
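the caldirola - kanai construction just mentioned can be verified symbolically . the following sketch takes constant lambda and omega ( an assumption made only to keep the computation short ) and checks that the euler - lagrange equation of the exponentially weighted lagrangian reproduces the damped-oscillator equation , and that the second derivative of the lagrangian with respect to the velocity returns the exponential weight , i.e. the jacobi last multiplier that appears in the sonin construction discussed next .

```python
# Symbolic check (a sketch, not from the paper) of the Caldirola-Kanai Lagrangian
# L = (m/2) * exp(2*lambda*t) * (qdot^2 - omega^2 q^2) with constant lambda, omega.
import sympy as sp

t = sp.Symbol('t')
m, lam, omega = sp.symbols('m lambda omega', positive=True)
q = sp.Function('q')(t)
qd = q.diff(t)

L = sp.Rational(1, 2) * m * sp.exp(2 * lam * t) * (qd**2 - omega**2 * q**2)

# Euler-Lagrange expression d/dt(dL/dqdot) - dL/dq
euler_lagrange = sp.diff(sp.diff(L, qd), t) - sp.diff(L, q)
print(sp.simplify(euler_lagrange / (m * sp.exp(2 * lam * t))))
# -> the damped-oscillator equation  q'' + 2*lambda*q' + omega**2*q  (up to term ordering)

# Jacobi last multiplier: d^2 L / d(qdot)^2 equals the exponential weight (times m)
print(sp.simplify(sp.diff(L, qd, 2)))
```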
for , the lagrangian description is generically unique , if there is any . more exactly , the sonin result is as follows . any second-order differential equation can be presented in the lagrangian form with the jacobi last multiplier . the multiplier satisfies the following equation . after finding the multiplier , the lagrangian can be obtained from the equation . it is easy to verify that equations ( [ eqdampedho ] ) ( with constant parameters ) , ( [ fipoitsquareq ] ) and ( [ hirotalagr ] ) have the following multipliers , and , correspondingly . so , in this way , from the mathematical viewpoint of the lagrangian and hamiltonian approaches , there is nothing paradoxical in the fact that the same hamiltonian system , the generalized harmonic oscillator in our case , is canonically equivalent to two different systems : the usual plane mathematical pendulum and the damped harmonic oscillator . nevertheless , in some physical scientific circles there is still a popular belief that the lagrangian and hamiltonian approaches are not suitable for the consideration of dissipative systems . unfortunately , this point of view is deeply rooted and many authors keep on inventing new formulations of the principle of the least action ( e.g. , ) . we studied the motion of the planar linearized mathematical pendulum with slowly varying parameters ( the mass , the length of the suspension string and the speed of the suspension point ) . the hamiltonian of the pendulum was cast into the form of the hamiltonian of the gho . thus , we have given an example of a simple hamiltonian system described by the equations of the gho . the paradoxical feature of the result is that the same hamiltonian system , the gho in this case , can be canonically equivalent to two different systems : the planar mathematical pendulum ( a hamiltonian system ) and the damped harmonic oscillator ( a dissipative system with a time-dependent lagrangian ) . this observation disputes the separation of dynamical systems into the two classes of dissipative and hamiltonian ones . in our opinion the dividing line might be , for example , between the systems which are invariant and non-invariant with respect to the time reversal transformation . [ [ section ] ] another convenient form of hamiltonian ( [ hamiltonian1 ] ) is its complex form . it can be obtained after changing the variables and by and .
\ .\label{q compl}\ ] ] the generating function of the transformation is get by integration of the equations and deduced from the differential identity characterizing a canonical transformation taking into account the relations ( [ q compl ] ) one gets the hamiltonian for the new conjugate variables is then obtained from the relation .its expression reads .\ ] ] one can verify that the hamilton equations indeed give correct equations for and the corresponding one for after substitution ( [ q compl ] ) in ( [ hameqs ] ) .conservation of the poisson brackets of the transformations ( [ q compl ] ) manifests about their canonicity .the multiplier of the term in eq .( [ hz ] ) gives once again the result ( [ d&gphases ] ) .[ [ section-1 ] ] - the other way to obtain the adiabatic invariant and phases is as follow .after presenting the term in the lagrangian ( [ lagrangian ] ) in the form and neglecting the total derivative , we get \frac{{\phi}^2}{2 } , \label{lagrangian2}\ ] ] where we can immediately identify the squared frequency , .\label{frequance}\ ] ] more rigorously , using eqs .( [ lagrangian2 ] ) or ( [ lagrangian ] ) , we should write the euler - lagrange equation and look for its solution in the form . neglecting quantities of the second order of , we obtain simple differential equations of the first order for and containing first derivatives of the parameters .the solutions of these equations yield ( [ action ] ) and ( [ d&gphasespend ] ) .20 f. wilczek , a. shapere , geometric phases in physics , world scientific , 1989 . s.i .vinitskii , v.l .derbov , v.m .dubovik , b.l .markovski , yu.p .stepanovskii , topological phases in quantum mechanics and polarization optics , sov .33(6 ) ( 1990 ) 403 - 428 .usatenko , g.p .provost , g. vallee , a comparative study of the hannay s angles associated with a damped harmonic oscillator and a generalized harmonic oscillator , j. phys .a : math . gen . 29( 1996 ) 2607 - 2610 .landau , e.m .lifshits , mechanics , butterworth - heinemann , 1976 . c. lanczos ,variational principles of mechanics , dover publications , inc .new york , 1986 .tarasov , quantum dissipative systems i. canonical quantization and quantum liouville equation , theoretical and mathematical physics 100 ( 1994 ) 402 - 417 . v.e .tarasov , quantum mechanics of non - hamiltonian and dissipative systems , elsevier science , 2008 .h. goldstein , classical mechanics , 3rd ed ., pearson education lim . , 2014 .r. hirota , exact n - soliton solution of nonlinear lumped self - dual network equations , j. phys .35 ( 1973 ) 289 - 294 .n. j. sonin , about determining maximal and minimal properties of plane curves , warsawskye universitetskye izvestiya ( 1886 ) ( 1 - 2 ) , 1 - 68 ( in russian ) ; english translation by r. matsyuk , lepage inst .archive , no . 1 ( 2012 ) , 1 - 42 .
|
we give an example of a simple mechanical system described by the generalized harmonic oscillator equation , which is a basic model in discussion of the adiabatic dynamics and geometric phase . this system is a linearized plane pendulum with the slowly varying mass and length of string and the suspension point moving at a slowly varying speed , the simplest system with broken -invariance . the paradoxical character of the presented results is that the same hamiltonian system , the generalized harmonic oscillator in our case , is canonically equivalent to two different systems : the usual plane mathematical pendulum and the damped harmonic oscillator . this once again supports the important mathematical conclusion , not widely accepted in physical community , of no difference between the dissipative and hamiltonian 1d systems , which stems from the sonin theorem that any newtonian second order differential equation with a friction of general nature may be presented in the form of the lagrange equation . adiabatic dynamics , geometric phase , lagrangian and hamiltonian mechanics , dissipative systems 01.55.+b , 03.65.vf , 45.20.jj , 45.30.+s , 45.50.dd
|
stochastic differential equation ( sde ) driven by a like is a basic model to describe time - varying physical and natural phenomena . theredo exist many situations where non - gaussianity of distributions of data increments , or of a residual sequence whenever available , is significant in small - time , making diffusion type models somewhat inappropriate to reflect reality .see as well as the enormous references therein , and also .this non - gaussianity characteristic may not be well modeled even by a diffusion with compound - poisson jumps as well , since jumps may be then sparse compared with sampling frequency , so that increments are approximately gaussian except for intervals containing jump - time points .this calls for a more tailor - made estimation procedure when the driving is of pure - jump type , for which the approximate gaussianity in small - time is no longer effective .it seems that the literature is far from being well developed , motivating our present study . in this paper, we consider a solution to the univariate markovian sde defined on an underlying complete filtered probability space with where : * the initial random variable is -measurable ; * the driving process is a pure - jump ( ) lvy process independent of and having the lvy - khintchine representation for and ( denotes the of ) ; * the trend coefficient and scale coefficient are assumed to be known except for the -dimensional parameter with and being bounded convex domains .we will assume that as the distribution weakly tends to the symmetric stable distribution with known index ( the assumption will be made rigorous in assumption [ a_j ] ) , and that the process is observed only at discrete but high - frequency time instants , , with nonrandom sampling step size as ; note that the index corresponds to the blumenthal - getoor activity index , see below .we are concerned here with estimation of , assuming that the true value does exist and mainly focusing on the so - called bounded - domain asymptotics : for a fixed terminal sampling time .this amounts to observing not a complete path but the time - discretized step process . \label{disc.x}\ ] ] due to the lack of a closed - form formula for the transition distribution , a feasible approach based on the _ genuine _ likelihood function is rarely available . in this paper, we will introduce a novel _ _ non - gaussian quasi - likelihood function _ _ , much extending the prototype mentioned in and . more specifically, we will provide a quasi - likelihood estimator such that is asymptotically mixed normally distributed under some conditions .this entails that the activity index determines the rate of convergence of estimating the trend parameter ; note that .most notably , unlike the case of diffusions ( cf . ) , we can estimate not only the scale parameter but also the trend parameter , with the explicit asymptotic distribution in hand . 
to prove the asymptotic mixed normality, we will take a doubly approximate procedure based on the euler - maruyama scheme combined with the stable approximation of for .our result seems to provide us with the first systematic methodology for estimating the possibly non - linear pure - jump lvy driven sde based on a non - gaussian quasi - likelihood .let us make a couple of remarks on our statistical model .first , the model is semiparametric in the sense that we do not completely specify the of , while supposing the parametric coefficients ; of course , the is an infinite - dimensional parameter , so that alone never determines the distribution . in estimation of , it seems desirable ( whenever possible ) to estimate with leaving the remaining parameters contained in as much as unknown .the proposed quasi - likelihood , termed as ( non - gaussian ) stable quasi - likelihood , will provide us with a widely applicable tool for this purpose , extending the preceding results on diffusion processes , where is a standard .in particular , it gives an estimator having much better asymptotic behavior compared with the previously studied gaussian maximum quasi - likelihood estimator , which is known to be inconsistent when the target sampling time period is fixed ; see for details , and also section [ sec_backgrounds ] for brief background .second , note that we have assumed from the very beginning that , that is , contains no gaussian factor .normally , the simultaneous presence of a non - degenerate diffusion part and a non - null jump part makes the parametric - estimation problem much more complicated .the recent papers and discussed usefulness of pure - jump models . although they are especially concerned with financial context , pure - jump processes should be useful for model building in many other application fields where non - gaussianity of time - varying data is of primary importance ; for example , signal processing ( detection , estimation , etc . ) , population dynamics , hydrology , radiophysics , turbulence , biological molecule movement , noise - contaminated biosignals , and so on . see also and for recent related works .finally , our model may be formally seen as a continuous - time analogue to the discrete - time model where are i.i.d .random variables . by making use of the small - time non - gaussian stable structure of ,our model setup enables us to formulate a flexible and unified estimation paradigm , which can not be shared with the discrete - time counterpart . in this context , let us make some general remarks on the high - frequency - sampling asymptotics . *the present bounded - domain asymptotics enables us to `` localize '' the event , sidestepping _ stability ( such as the ergodicity ) _ and _ moment - condition _ issues on , which is quite often inevitable for developing asymptotic theory for .instead , we need much more than the ( martingale ) central limit theorem with gaussian limit : a mixed normal limit theory for laq statistical experiments plays an essential role . fortunately , we have a very general tool , that is , jacod s characterization of conditionally gaussian martingales ( see and , and also section [ sec_localization ] ) , which in particular deal with the sde when is a pure - jump .* we have yet another benefit coming from the fine continuous - time model structure : as in the case of estimation of the diffusion coefficient of a diffusion type process ( cf . 
and ) , we may treat much more general model setup , say a semimartingale regression model , with keeping the asymptotic mixed normality result for both location and scale coefficients .see section [ sec_model.extensions ] for more details .* nevertheless , it is well - known that observed information corresponding to some parameters is stochastically bounded unless , thereby theoretically preventing us from estimating it consistently ; see , and and the the references therein , and also section [ sec_backgrounds ] . at the same time, we should note that there is no general relation between actual - time and model - time scales ; virtually , one may always set the terminal sampling time to be a fixed value , so that may represent one day , one month , one year , etc . as a matter of fact , our quasi - likelihood function does work even under the large - time asymptotics where .obviously , in that case the long - run characteristic of crucially affects the asymptotics . in section [ sec_ergodic ]we will give a brief exposition of this case under the ergodicity .here is a list of conventions and basic notation used throughout this paper . in what followswe will largely suppress the dependence on from the notations and , and denote by the family of the image measures of given by in , the skorokhod space of functions from to ; for brevity , we will use the symbol also for the true image measure of . for any process , denotes the increments , and we write for a function having two components , such as . for a variable , denotes the partial - differentiation operators with respect to the components of ( e.g. and ) ; given a function with , we write for the array of partial derivatives of dimension .the characteristic function of a random variable is denoted by . for any matrix let with denoting the transpose .we use for a generic positive constant which may vary at each appearance , and write when for every large enough . finally , the symbols and denote the convergences in -probability and in distribution , respectively .the rest of this paper is organized as follows .we begin in section [ sec_backgrounds ] with some background of our objective , and then describe our basic model setup in section [ sec_setup ] .the main results is presented in section [ sec_sql ] , followed by numerical experiments in section [ sec_simulations ] .section [ sec_proofs ] is devoted to the proof of the main results .for better clarifying the novelty of our result , let us first mention the gaussian quasi - likelihood estimation in case of the parametric ergodic diffusion model with true invariant distribution .the gaussian quasi - maximum likelihood estimator ( gqmle ) is defined to be any maximizer of the random function which comes from the `` fake '' small - time gaussian approximation of the transition probability : under appropriate conditions we have the asymptotic normality ( cf . , , and ) : \bigg ) .\label{gqmle_an}\end{aligned}\ ] ] we should note that the simple form , which works under the sampling - frequency condition , is just for simplicity of exposition , and incorporating the higher - order it - taylor expansions of the one - step conditional mean and variance into enables us to define an estimator having the same asymptotic normality as in . 
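for concreteness , the local-gauss contrast just described can be written down in a few lines . the sketch below is an illustration only : the coefficients a ( x ; alpha ) = - alpha x and c ( x ; gamma ) = gamma are placeholder choices ( the text keeps the coefficients general ) , and the data are generated by the euler scheme itself with a long observation window , as required for consistent drift estimation in the diffusion case .

```python
# Sketch of the local-Gauss (Gaussian quasi-likelihood) contrast whose maximizer is
# the GQMLE.  Placeholder coefficients a(x; alpha) = -alpha*x, c(x; gamma) = gamma.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
T, n = 50.0, 50000                 # long time horizon: nh -> infinity regime
h = T / n
alpha_true, gamma_true = 2.0, 0.5

# Euler scheme for dX_t = -alpha X_t dt + gamma dW_t
x = np.empty(n + 1); x[0] = 1.0
for j in range(n):
    x[j + 1] = x[j] - alpha_true * x[j] * h + gamma_true * np.sqrt(h) * rng.standard_normal()

dx, xl = np.diff(x), x[:-1]

def neg_gauss_ql(theta):
    alpha, gamma = theta
    mean = -alpha * xl * h                      # one-step conditional mean
    var = h * gamma**2                          # one-step conditional variance
    return 0.5 * np.sum(np.log(2 * np.pi * var) + (dx - mean)**2 / var)

fit = minimize(neg_gauss_ql, x0=[1.0, 1.0], method="Nelder-Mead")
print(fit.x)   # roughly recovers (alpha_true, gamma_true) when h is small and T is large
```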
the asymptotic covariance matrix in is known to be asymptotically optimal .also well known in the literature is that , even when we may estimate in an asymptotically efficient manner as by making use of the variant of : where the drift coefficient is now a non - estimable nuisance element ( cf . , , and ) .here , the symbol `` '' stands for a -variate mixed normal distribution , see section [ sec_amn ] . obviously , andare already observed in case of the scaled wiener process with drift , where the gaussian quasi - likelihood becomes the genuine likelihood , so that the asymptotics of the mle becomes trivial ; then , formally reduces to . in this paper, we will show that the notion of the _ `` local - gauss '' _ contrast function can be extended to the _ `` local - non - gaussian - stable '' _ counterpart , which brings in an essentially more efficient estimator in the present case , where and ; indeed , our asymptotic mixed normality result given in theorem [ sqmle.iv_bda_thm ] will formally extend both and .nevertheless , it should be mentioned that the gqmle can be used even for the pure - jump lvy driven case .indeed , it turned out in the previous work that adopting the gaussian quasi - maximum likelihood estimator based on the local - gauss approximation leads to the asymptotic normality of the form for some explicit . in view of our result in section [ sec_amn ] ,the gqmle may then entail much efficiency loss .turning to estimation strategy other than the gqmle , we give some remarks .our quasi - likelihood has its origin in the maximum - likelihood estimation of the , where is a standard -stable associated with the characteristic function .then the increment is form i.i.d .array with common , and direct explicit computations should lead to the asymptotic normality and asymptotic efficiency of the rescaled maximum - likelihood estimator ; see and for details .note that we have the different features according to the value : * for , the noise part is dominant compared with the drift part , hence can be estimated more quickly than ; * for , the opposite can be said ; * for the critical case ( the cauchy case ) , the drift and the noise parts are of the same stochastic order .concerning the possibly non - linear sde , our main result reveals an extended asymptotic phenomenon for . for an explicit example , let us consider estimation of of a where is a normal - inverse gaussian , which is locally cauchy ( see ) .just for comparison with the gqmle , we set for some , so that , , and ; see ( also example [ j.ex_nig ] below ) for the details of nig es. then we have the following . 1 .the gqmle is asymptotically normal : where we may leave unknown while the value does affect the asymptotic covariance matrix .this can be deduced in a direct manner , following an analogous way to ; indeed , even for a general centered and standardized ( i.e. , ) , it readily follows from the fact for , and the lindeberg - feller theorem applied to the identity together with the delta method that 2 .let .the qmle based on the cauchy likelihood , which is a special case of the quasi - likelihood introduced in section [ sec_sql.heuristic ] , satisfies that as a matter of fact , the asymptotic normality holds even when with leaving unknown . in this casethe qmle is asymptotically efficient ( cf . and ) .thus , very different asymptotic behaviors are observed , the latter being much better . 
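the cauchy quasi-likelihood used in the comparison above is the beta = 1 instance of the stable quasi-likelihood and is fully explicit . the sketch below mirrors the gaussian one , again with the placeholder coefficients a ( x ; alpha ) = - alpha x and c ( x ; gamma ) = gamma ( not the coefficients of the example ) ; here the driver is simulated as an exact cauchy lvy process , the simplest locally 1-stable case , and the observation window is kept fixed .

```python
# Sketch of the Cauchy (beta = 1) quasi-likelihood, the non-Gaussian counterpart of
# the local-Gauss contrast above.  Placeholder coefficients a(x; alpha) = -alpha*x,
# c(x; gamma) = gamma; the driver is an exact Cauchy Levy process (locally 1-stable).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
T, n = 1.0, 10000                 # bounded observation window, high frequency
h = T / n
alpha_true, gamma_true = 2.0, 0.5

x = np.empty(n + 1); x[0] = 1.0
for j in range(n):
    dj = h * rng.standard_cauchy()            # Cauchy Levy increment: J_h =d h * Cauchy(0,1)
    x[j + 1] = x[j] - alpha_true * x[j] * h + gamma_true * dj

dx, xl = np.diff(x), x[:-1]

def neg_cauchy_ql(theta):
    alpha, gamma = theta
    if gamma <= 0:
        return np.inf
    eps = (dx + alpha * xl * h) / (h * gamma)     # h^{1/beta} = h for beta = 1
    # standard Cauchy density phi_1(u) = 1 / (pi * (1 + u^2))
    return np.sum(np.log(h * gamma) + np.log(np.pi * (1.0 + eps**2)))

fit = minimize(neg_cauchy_ql, x0=[1.0, 1.0], method="Nelder-Mead")
print(fit.x)
# for beta = 1 both components are estimated at the same sqrt(n) rate even though T is fixed
```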
in section [ sec_sim.nig.j ]we will present several simulation results for the sde driven by a nig .[ rem_nig.ex ] contrary to the diffusion case , very little is known about asymptotic - efficiency phenomenon for the lvy driven sde when observing . for the classical lan ( local asymptotic normality )property results when is a , i.e. when and are constants , we refer to for several explicit case studies , and to for a general locally stable es .recently , proved the lamn ( local asymptotic mixed normality ) property about the drift parameter especially when is a given constant and the support of the is bounded .the asymptotic efficiency in the sense of hajk - le cam - jeganathan of our qmle is assured by their lamn result . just like the fact that the gqmle is asymptotically efficient for diffusions , we conjecture that our qmle is asymptotically efficient when the sde driven by a `` locally -stable '' ( see section [ sec_lslp ] for the definition ) ; see also remark [ rem_efficiency ]. the detailed study of which is beyond the scope of this paper , and one of important future works .[ rem_asymp.efficiency ] for general locally -stable pure - jump it - semimartingale models , there exist many results on asymptotic behavior of the power - variation statistics of the form }|n^{1/\beta}{\delta}_{j}x|^{p} ] ; * is strictly -stable ; * each admits a bounded continuous lebesgue density .suppose for a moment that satisfies the above condition under the choice with the weak limit being the standard symmetric -stable distribution corresponding to the characteristic function .we denote this distribution by : let us call any such a _ locally stable _ .the value is the blumenthal - getoor index defined by which measures degree of s jump activity .we note that many locally stable es with finite variance can exhibit large - time gaussianity ( i.e. central limit effect ) , in addition to the small - time non - gaussianity ; see and , as well as the references therein .this is consistent with the stylized facts observed in some actual phenomena .the property follows from a -stable - like behavior of around the origin .let us briefly discuss how to verify it . by with the symmetry of ,the random variable has no drift and its is given by then , according to ( * ? ? ?* theorem 8.7 ) we have as if and only if for every continuous bounded function vanishing in a neighborhood of the origin , where is the of , namely for where with ( * ? ? ?* lemma 14.11 ) .a convenient sufficient condition can be given in terms of the of the most active part of : for example , it suffices that for a neighborhood of the origin can be bounded below on by a -stable - like absolute continuous part .specifically , assume that is symmetric and decomposed as where in a neighborhood of the origin where for a bounded continuous non - negative function satisfying that , and where , \label{nu.flat_0}\ ] ] for some .the condition means that the -part of is strictly less active than the -part ; equivalently , writing with independent es and corresponding to the s and , respectively , we have and as , hence the locally stable property .the admits the given by the condition is satisfied by many specific lvy processes such as the generalized hyperbolic ( except for the variance gamma ) , student- , meixner , stable , and the ( normal ) tempered stable es .moreover , can distributionally approximate a by controlling dominating parameters in a suitable manner . 
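the locally stable behaviour can also be checked by simulation for a concrete driver . the sketch below uses the symmetric nig case treated in the example that follows ; the parameter choice alpha = delta = 1 and the inverse-gaussian mixture representation are assumptions made only for this illustration . rescaling the small-time increment by h^{-1} ( the beta = 1 scaling ) should bring it close to the standard cauchy law , and a kolmogorov - smirnov distance gives a rough quantitative check .

```python
# Sketch of the locally (Cauchy-) stable behaviour of a symmetric NIG Levy process:
# h^{-1} * J_h is approximately standard Cauchy for small h.  Assumed parameters
# alpha = delta = 1; J_h is drawn from its normal variance-mixture representation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
alpha, delta = 1.0, 1.0

def nig_increments(h, size):
    # symmetric NIG(alpha, 0, delta*h, 0) as a normal variance mixture:
    # Z ~ inverse Gaussian with mean delta*h/alpha and shape (delta*h)^2, then sqrt(Z)*N(0,1)
    z = stats.invgauss.rvs(mu=1.0 / (alpha * delta * h), scale=(delta * h)**2,
                           size=size, random_state=rng)
    return np.sqrt(z) * rng.standard_normal(size)

for h in (0.1, 0.01, 0.001):
    sample = nig_increments(h, 100000) / h        # beta = 1 scaling: h^{-1/beta} = h^{-1}
    ks = stats.kstest(sample, "cauchy")
    print(h, ks.statistic)
# the KS distance to the standard Cauchy law shrinks with h, down to the Monte Carlo noise floor
```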
under itis not difficult to show that , so that , thanks to sharpe s criterion , is everywhere positive and is always well - defined .although the the locally stable property is enough to motivate the construction of our quasi - likelihood given in section [ sec_sql.heuristic ] , we will need a stronger mode of convergence for derivation of our asymptotic results .let denote the bounded smooth lebesgue density of .our assumptions on given by are as follows : has a symmetric , and is locally -stable for some in the sense of .further , there exists a signed measure for which the signed measure weakly converges to in , where denotes the continuous lebesgue density of .[ a_j ] assumption [ a_j ] entails an -local limit theorem for with some convergence rate , and we will need this assumption especially for proving the stable central limit theorem for the quasi - score function evaluated at the true value .it is obviously automatic if , while otherwise verification is not so straightforward even when an explicit form of is available .a trivial sufficient condition is that there exist a constant and a positive sequence such that then , we have . here is an example where holds .let be an nig such that , where may be unknown .the probability density and the of are given by respectively .the distribution tends as to the standard cauchy distribution , whose probability density and are given by and , respectively .concerning assumption [ a_j ] , we note that , where with .we deduce from that : * as ; * as , hence for ] .since , the condition follows on picking a .[ j.ex_nig ] the above example is special since we do have the fully explicit .let us discuss how to verify assumption [ a_j ] in terms of .for this purpose we refer to the following lemma , which provides us with some easy conditions under which we can derive the rate of convergence in the sup - norm local limit theorem .assume that the decomposition holds with , , and , that both and are symmetric , and that there exists a constant such that then , there exits a constant such that with the value can be taken as follows : * if , for any ] and , where denotes the blumental - getoor index of the . [ lem_ejs.lem4 ] see ( * ? ? ?* lemma 4.4(b ) ) for the proof of lemma [ lem_ejs.lem4 ] with more details .we also refer to the following general inequality , which states that the -norm estimate can be deduced from the sup - norm estimate , with a slight loss in convergence rate .let and be probability densities on , and a number such that then we have [ lem_sl.lim.thm.1 ] this lemma was given in , the proof being simple : for any we have , hence optimizing the upper bound with respect to leads to . in the next corollarywe forget the assumption .assume that holds , that for some , and that then , hence in particular assumption [ a_j ] holds .[ cor_sl.lim ] it holds that , hence if and fulfils and for , then we can apply lemma [ lem_sl.lim.thm.1 ] with : we have under . in case of and , the condition reduces to this condition entails that , preferring a bigger . the criterion lemma [ lem_sl.lim.thm.1 ] is not sharp , as seen from example [ j.ex_nig ] : there , we can show that , so that holds with , while lemma [ lem_sl.lim.thm.1 ] only tells us that .let denote the closure of . the functions and are globally lipschitz and of class , 2 . and for each .3 . .[ a_coeff ] the standard theory ( e.g. ( * ? ? ?* iii 2c . 
) ) ensures that the sde admits a unique strong solution as a functional of and the poisson random measure driving ; in particular , each is -measurable . the random functions and on ] reflects that virtually continuously evolves as time goes along , though not observable . we independently repeat the above procedures for times to get independent estimates , based on which we give boxplots and histograms for studentized versions ( corollary [ sqmle.iv_bda_thm.cor ] ) . we used the function ` optim ` in r , and in each optimization for we generated independent uniform random numbers and for initial values for searching and , respectively . we conduct the following two cases : * we know _ a priori _ that , and the estimation target is ; * the estimation target is . from the obtained simulation results , we observed the following . * figures [ fig : bp1 * 1_4panels ] and [ fig : hist1 * 1 ] : case of ( i ) . * * the boxplots show the clear tendency that for each , estimate accuracy gets better for larger . * * the histograms show overall good standard normal approximations ; the straight line in red is the target standard normal density . for the cases where , we can see downward bias of the studentized , although it disappears as increases . also observed is that the estimation performance of gets worse if the nuisance parameter gets larger from to . + overall , we see very good finite-sample performance of , while that of may be affected to some extent by the value of . in particular , it is observed that the standard normal approximation of the studentized for is insufficient if is distributionally far from the cauchy ( i.e. if is large ) and is not large enough . as in the case of estimation of the diffusion coefficient for diffusion type processes , for better estimating the value should not be so large , equivalently should not be so large . * figures [ fig : bp2 * 2_4panels ] and figures [ fig : hist2 * 2 - 1][fig : hist2 * 2 - 2 ] : case of ( ii ) . * * general tendencies are the same as in the previous case : for each , estimate accuracy gets better for larger , while the gain of estimation accuracy for larger is somewhat smaller compared with the previous case . * * the histograms clearly show that , compared with the previous case , the studentized estimators are of heavier tails and the asymptotic bias associated with severely remains , especially for ( figure [ fig : hist2 * 2 - 2 ] ) , unless is large enough . the bias in estimating would disappear only slowly . unfortunately , in general it is theoretically hard to make a bias correction without specific information of , hence of the ; see also remark [ rem_bias.correction.hard ] . boxplots of the independent estimates ( light green and blue ) for the nig-driven case , one panel per parameter setting .
histograms of the corresponding studentized estimates for the nig-driven case , one submatrix per parameter setting . next we assume that with , hence the stable density is no longer explicit while numerically computable . numerical optimization in maximum-likelihood type estimation involving a non-gaussian stable density is unfortunately time-consuming . in our case , given a realization of we have to evaluate \[ \sum_{j=1}^{n}\bigg\{-\log\big(h^{1/\beta}c(x_{t_{j-1}},{\gamma})\big)+\log\phi_{\beta}\bigg(\frac{x_{t_{j}}-x_{t_{j-1}}-a(x_{t_{j-1}},{\alpha})h}{h^{1/\beta}c(x_{t_{j-1}},{\gamma})}\bigg)\bigg\}\] repeatedly through numerical integration for computing . for this we used the function ` dstable ` in the r package ` stabledist ` , together with ` optim ` . as before , we give simulation results for , . in order to observe the effect of the terminal-time value , we conduct the cases of and , for , , and . for studentization , we used the values and borrowed from ( * ? ? ? * table 6 ) . * figures [ fig : s_bp1 * 1_2panels ] and [ fig : s_hist1 * 1 ] show the boxplots and the histograms when for and . as is expected from our theoretical results , we can observe much better estimation accuracy compared with the previous nig-driven case ; recall that assumption [ a_j ] then automatically holds with . the boxplots and histograms reveal that the estimation accuracy of are overall better for larger , while at the same time a large can lead to a more biased unless is not so large .
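a python analogue of this ` dstable ` / ` optim ` workflow can be sketched as follows . it is an illustration only : the coefficients a ( x ; alpha ) = - alpha x and c ( x ; gamma ) = gamma are placeholders , ` scipy.stats.levy_stable ` with zero skewness corresponds to the characteristic function exp ( - |u|^beta ) used here , and the sample size is kept small because each evaluation of the stable log-density relies on numerical integration and is therefore slow .

```python
# Sketch of the stable quasi-likelihood for beta = 1.5, where the symmetric stable
# density is computed numerically (the Python counterpart of dstable + optim).
# Placeholder coefficients a(x; alpha) = -alpha*x, c(x; gamma) = gamma.
import numpy as np
from scipy.stats import levy_stable
from scipy.optimize import minimize

beta = 1.5                          # stability index of the driver
rng = np.random.default_rng(4)
T, n = 1.0, 200                     # small n: levy_stable.logpdf is expensive
h = T / n
alpha_true, gamma_true = 1.0, 0.3

# first positional argument of levy_stable is the stability index, second the skewness
dJ = h**(1.0 / beta) * levy_stable.rvs(beta, 0.0, size=n, random_state=rng)
x = np.empty(n + 1); x[0] = 1.0
for j in range(n):
    x[j + 1] = x[j] - alpha_true * x[j] * h + gamma_true * dJ[j]

dx, xl = np.diff(x), x[:-1]

def neg_stable_ql(theta):
    a, g = theta
    if g <= 0:
        return np.inf
    eps = (dx + a * xl * h) / (h**(1.0 / beta) * g)
    return np.sum(np.log(h**(1.0 / beta) * g) - levy_stable.logpdf(eps, beta, 0.0))

fit = minimize(neg_stable_ql, x0=[0.5, 0.5], method="Nelder-Mead",
               options={"xatol": 1e-3, "fatol": 1e-3})
print(fit.x)    # rough recovery of (alpha_true, gamma_true); expect a noticeable runtime
```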
also , different from the nig driven casethere is no severe bias in estimating .somewhat surprisingly , the accuracy of studentization especially for the scale parameters may be good enough even for much smaller compared with the nig driven case : the standard normality is well achieved even for .* figures [ fig : s_bp2 * 2_2panels ] and [ fig : s_hist2 * 2 ] show the results for with or .the observed tendencies , including those compared with the nig driven cases , are almost analogous to the case of . in sum , our stable quasi - likelihood works quite wellespecially when is standard -stable , however , so small should be avoided for good estimation accuracy of . independent estimates ( light green ) and ( blue ) for , , ; ( left ) and ( right ) . ] independent estimates ( light green ) and ( blue ) for , , ; ( left ) and ( right ) . ]independent studentized estimates of ( light green ) and ( blue ) for , , ; ( left submatrix ) and ( right submatrix ) . ]independent studentized estimates of ( light green ) and ( blue ) for , , ; ( left submatrix ) and ( right submatrix ) . ]independent estimates ( light green ) , ( blue ) , ( pink ) and ( red ) for , , ; ( top ) and ( bottom ) . ]independent estimates ( light green ) , ( blue ) , ( pink ) and ( red ) for , , ; ( top ) and ( bottom ) . ]independent studentized estimates of ( light green ) , ( blue ) , ( cream ) and ( red ) for , , ; ( left submatrix ) and ( right submatrix ) . ] independent studentized estimates of ( light green ) , ( blue ) , ( cream ) and ( red ) for , , ; ( left submatrix ) and ( right submatrix ) . ]throughout this section , we suppose assumptions [ a_j ] and [ a_coeff ] .most of the key moment estimates involved in the proofs , such as burkholder s inequality , can not directly apply if is heavy - tailed .we begin with a localization of the underlying probability space by eliminating big jumps of , thus enabling us to proceed as if for every .the point here is that , since our main results are concerned with the weak properties over the fixed period ] a solution process to the sde which obviously admits a strong solution as a functional of .we have for } \nn\ ] ] in what follows ._ for notational convenience , we keep using the notation instead of . then , some important remarks are in order .* we may suppose that there exists a positive constant such that .\nn\ ] ] it then follows from ( * ? ? ? * theorem 2(a ) and ( c ) ) that * following the argument ( * ? ? ?* section 2.1.5 ) together with gronwall s inequality under the global lipschitz condition of , we see that for any and ] . * if for some ( see lemma [ lem_ejs.lem4 ] ) , then for throughout this section , we focus on the random function where and are measurable functions .this form of will appear in common in the proofs of the consistency and asymptotic mixed normality of the sqmle , and the results in this section will be repeatedly used later .since we are assuming that the parameter space is a bounded convex domain , the sobolev inequality is in force : for each , we have to complete the proof , it therefore suffices to show that both and are -bounded for each and .fix any and in the rest of this proof . put , so that . 
under the present regularity conditions we may pass the differentiation with respect to through the operator : for each , the sequences and form martingale difference arrays with respect to , hence burkholder s inequality gives for .the required -boundedness of follows on showing that of .observe that for and , by we have }\e(|h^{-1/\beta}j_{h}|^{r})<\infty ] . ] ( see ( * ? ? ?* theorems 3.2.1b ) and 3.2.2a ) ) ) : for , let denote the formal infinitesimal generator of : where the second term in the right - hand side is well - defined. then obviously , we have for such that the derivatives for exist and have polynomial majorants .we begin with . applying with and then taking the conditional expectation ,we get write and . using and noting that is essentially bounded , we get in view of the expression and jensen s inequality , the claim follows if we show . \label{u'_so.3}\ ] ] by we may express as where with being at most polynomial growth in , and where ; by the regularity conditions on and we have .hence , for it suffices to prove let for ] is an essentially bounded martingale . according to the martingale representation theorem ( * ? ? ?* theorem iii.4.34 ) , the process can be represented as a stochastic integral of the form , \label{u'_so.5}\ ] ] with a bounded predictable process such that we now look at the left - hand side of : taking the conditioning with respect to inside the sign `` '' , substituting the expression with , and then applying the integration - by - parts formula for martingales , we see that it equals by means of jensen and cauchy - schwarz inequalities and the bound together with , we can estimate the -supremum - absolute moment ( set ) of the last quantity as follows : du \nn\\ & \lesssim \frac{1}{h}\int_{t_{j-1}}^{s}\e\left\ { \e^{j-1}\left ( 1 + |x_{u}|^{c } \right ) \right\ } du \nn\\ & \lesssim \frac{1}{h}\int_{t_{j-1}}^{s}\e\left ( 1 + |x_{t_{j-1}}|^{c } \right)du \lesssim 1 .\nonumber\end{aligned}\ ] ] thus we obtain , concluding that .next we consider .using the martingale representation for as before , we have as in the case of we have , hence we can write as where for ] , the processes and for ] , hence there correspond predictable processes and , such that and and that .thus , using the integration by parts formula as before we can rewrite as by means of cauchy - schwarz inequality we see that the first term in the right - hand side is , hence so is .for any measurable function such that we have } f(x_{t_{j-1}},\theta ) -\frac{1}{t}\int_{0}^{t}f(x_{s},\theta)ds\bigg|{\xrightarrow{p}}0 . \nonumber\ ] ] [ lem.aux.lln ] the target quantity can be bounded by }\frac{1}{h}\int_{j}\sup_{\theta}|f(x_{s},\theta)-f_{j-1}(\theta)|ds + \frac{h}{t}\sup_{\theta}\sup_{t\le t}|f(x_{t},\theta)| \nn\\ & \lesssim \frac{1}{n}\sum_{j=1}^{n}\frac{1}{h}\int_{j}(1+|x_{t_{j-1}}|+|x_{s}|)^{c}|x_{s}-x_{t_{j-1}}|ds + \frac{h}{t}\bigg ( 1+\sup_{t\le t}|x_{t}|^{c}\bigg ) .\nonumber\end{aligned}\ ] ] by the expectation of the upper bound is at most , hence the claim .\(1 ) for , we write as the sum of and where pick a so that holds . 
since , hence we have the bounds : and , in particular , likewise , noting that the same upper bound as in remains valid for the function instead of , we see that can by bounded by a sum of the terms : a constant multiple of ( coming from the term involving ) and the claim now follows from lemma [ lem.aux.lln ] .\(2 ) for , we have as with the case , it can be seen that the first term in the rightmost side of equals uniformly in .moreover , by the boundedness of and the estimate , the second term is uniformly in .hence lemma [ lem.aux.lln ] ends the proof of the first half . under the conditions in lemma [ lem_aux.se_r1 ]( implied by those in lemma [ lem_aux.se1 ] ) , it follows from that if further is odd , then by the symmetry of the density we have for each .the identity then becomes , hence the latter claim in ( 2 ) .let and be compact sets , and let be a random function of the form for some positive non - random sequences and and some continuous random functions and .let be a non - random vector .assume the following conditions : the assumption implies that in .let the second term in the rightmost side is uniformly in , so that in with the limit a.s .uniquely maximized at . since ,the argmax theorem ( e.g. ) concludes that .we can follow a similar way to deduce along with replacing by which has the continuous limit process in , uniquely maximized at a.s .returning to our model , we make a couple of remarks concerning assumption [ a_iden ] .let us recall the notation and .for , we define the random functions and by \phi_{\beta}(z)dzdt , \label{def_y0b.a } \\ { \mathbb{y}}_{\beta,2}(\theta ) & = \frac{1}{2t}\int_{0}^{t}{\mathfrak{b}}^{2}(x_{t},\theta ) \int{\partial}g_{\beta}\bigg(\frac{c(x_{t},{\gamma}_{0})}{c(x_{t},{\gamma})}z\bigg)\phi_{\beta}(z)dzdt . \label{def_y0b.g}\end{aligned}\ ] ] we also define by \phi_{1}(z)dzdt .\nn\end{aligned}\ ] ] these three functions are a.s .continuous in .since the function defines a density for every constants and , assumptions [ a_coeff ] and [ a_iden ] together with jensen s inequality ( applied -wise ) imply that the -integrand in is a.s .non - positive and holds only when the -integrand is zero for ] : } \frac{{\partial}_{{\alpha}}a_{j-1}({\alpha}_{0})}{c_{j-1}({\gamma}_{0})}g_{\beta}({\epsilon}_{j}),\ , -\frac{1}{\sqrt{n}}\sum_{j=1}^{[t / h ] } \frac{{\partial}_{{\gamma}}c_{j-1}({\gamma}_{0})}{c_{j-1}({\gamma}_{0})}\left\{1+{\epsilon}_{j}g_{\beta}({\epsilon}_{j})\right\ } \bigg),\quad t\in[0,t ] .\nonumber\ ] ] let so that .write and respectively for and with the integral signs `` '' in their definitions replaced by `` '' . by means of (* theorem 3 - 2 ) ( or ( * ? ? 
?* theorem ix.7.28 ) ) , the stable convergence is implied by the following conditions : for each ] , and by lemma [ lem.aux.lln ] the first term in the right - hand side converges in probability to .the uniform convergence follows on applying and lemma [ lem.aux.lln ] : }\pi_{j-1}\e^{j-1}\{\eta({\epsilon}_{j})\ } & = \frac{1}{n}\sum_{j=1}^{[t / h]}\pi_{j-1}\bigg(\sqrt{n}\int\eta(z)f_{h}(z)dz\bigg ) + o_{p}(\sqrt{n}h^{2 - 1/\beta } ) \nn\\ & = \frac{1}{n}\sum_{j=1}^{[t / h]}\pi_{j-1}\left\{\left(0,\ , b_{\beta}(\nu)\right ) + o(1)\right\ } + o_{p}(h^{3/2 - 1/\beta } ) \nn\\ & = \tcc{\bigg(0,\ , -\frac{1}{t}\int_{0}^{t}\frac{{\partial}_{{\gamma}}c(x_{s},{\gamma}_{0})}{c(x_{s},{\gamma}_{0})}ds b_{\beta}(\nu)\bigg ) } + o_{p}(1 ) \nn\\ & = \big(0,\ , \mu_{t,{\gamma}}(\tz;\beta)b_{\beta}(\nu ) \big ) + o_{p}(1 ) , \nonumber\end{aligned}\ ] ] all the order symbols above being uniformly valid in ] are bounded , and we have }{\delta}_{j}m { \xrightarrow{a.s.}}m_{t} ] .let }\frac{1}{\sqrt{n}}\pi_{j-1}\eta({\epsilon}_{j } ) .\nonumber\ ] ] for each , is a local martingale with respect to , and equals that for each . the angle - bracket process }\pi^{2}_{j-1}\e^{j-1}\{\eta({\epsilon}_{j})\ } \nonumber\ ] ] is -tight , that is , it is tight in ;{\mathbb{r}}) ] .further , for every , as in the case of we have we conclude from ( * ? ? ?* theorem vi.3.26(iii ) ) that is -tight .fix any . by (* theorem vi.3.33 ) the process is tight in ;{\mathbb{r}}) ] . by wehave hence it follows from ( * ? ? ?* corollary vi.6.30 ) that the sequence is predictably uniformly tight , in particular , ) { \xrightarrow{{\mathcal{l}}}}(h , [ h]) ] a.s .identically ( cf .* theorem i.4.52 ) ) .thus we have seen that given any we can find a further subsequence for which {\xrightarrow{{\mathcal{l}}}}0 ] .the setting of the underlying filtration is not essential .we could enlarge it as long as the martingale - representation arguments stay valid .even when the underlying probability space carries a wiener process , we may still follow the martingale - representation argument as in .[ rem_sclt.proof ] the components of consist of by and corollary [ prop_mainulln.cor ] for , the first term in the right - hand side of is .hence proposition [ prop_mainulln ] gives as for , by proposition [ prop_mainulln ] and we see that the first term in the right - hand side of is .as for the second term , noting the function satisfies that , we get finally , since is odd , proposition [ prop_mainulln ] concludes that completing the proof of .according to the continuity of the random mapping , by applying the uniform law of large numbers presented in lemma [ lem.aux.lln ] we can deduce the convergences : , , and . 
then follows from , , and , combined with the continuous mapping theorem .additionally assumption [ a_ergo ] , and [ a_ergo.iden ] in place of assumption [ a_iden ] are in force .most parts are essentially the same as in the proof of theorem [ sqmle.iv_bda_thm ] , hence we only give a few comments .first we note that the localization introduced in section [ sec_localization ] is not necessary here , since under the assumed moment boundedness for any and the global lipschitz property of , by the standard argument we can deduce a large - time version of the latter inequality in : obviously , and remain the same .lemmas [ lem_aux.se1 ] and [ lem_aux.se_r1 ] stay valid as well .write .by we have for each , hence it suffices to show the tightness of in , which implies the tightness of in .this is obvious , for we have .having lemma [ lem.aux.lln_ergo ] in hand , we can follow the contents of sections [ sec_proof.consistency ] , [ sec_proof.amn ] , and [ sec_lln.scle.proof ] .the proof of the central limit theorem is much easier than the mixed normal case , for we now have no need for looking at the step processes introduced in section [ sec_score.scle.proof ] and also for taking care of the asymptotic orthogonality condition ; by means of the classical central limit theorem for martingale difference arrays ( e.g. ) , we only have to show all of which can be deduced from the same arguments we have used in section [ sec_score.scle.proof ] .o. e. barndorff - nielsen , j. m. corcuera , and m. podolskij .limit theorems for functionals of higher order differences of brownian semi - stationary processes . in _ prokhorov and contemporary probability theory _ , volume 33 of _ springer proc_ , pages 6996 .springer , heidelberg , 2013 .a. dvoretzky .asymptotic normality of sums of dependent random vectors . in _multivariate analysis , iv ( proc .fourth internat ., dayton , ohio , 1975 ) _ , pages 2334 .north - holland , amsterdam , 1977 .i. s. gradshteyn and i. m. ryzhik . .elsevier / academic press , amsterdam , seventh edition , 2007 . translated from the russian , translation edited and with a preface by alan jeffrey and daniel zwillinger , with one cd - rom ( windows , macintosh and unix ) .j. jacod . on continuous conditional gaussian martingales and stable convergence in law .in _ sminaire de probabilits , xxxi _ , volume 1655 of _ lecture notes in math ._ , pages 232246 .springer , berlin , 1997 .j. jacod . statistics and high - frequency data . in _ statistical methods for stochastic differential equations _ , volume 124 of _ monogr ._ , pages 191310 .crc press , boca raton , fl , 2012 .h. masuda . on quasi - likelihood analyses for stochastic differential equations with jumps . in _ int .statistical inst .58th world statistical congress , 2011 , dublin ( session ips007 ) _ , pages 8391 , 2011 .v. m. zolotarev . , volume 65 of _ translations of mathematical monographs_. american mathematical society , providence , ri , 1986 . translated from the russian by h. h. mcfaden , translation edited by ben silver .
|
we address estimation of parametric coefficients of a pure-jump lévy driven univariate stochastic differential equation (sde) model, which is observed at high frequency over a fixed time period. it is known from previous studies that adopting the conventional gaussian quasi-maximum likelihood estimator may result in a significant loss of efficiency when the driving noise deviates distributionally from gaussianity, and that the estimator is even inconsistent without a large-time observation period. in this paper, under the assumption that the driving lévy process is locally stable, we propose a novel quasi-likelihood function based on the small-time non-gaussian stable approximation of the unknown transition density. the resulting estimator is shown to be asymptotically mixed normally distributed and, compared with the gaussian quasi-maximum likelihood estimator, it achieves much better theoretical performance in a unified manner for a wide range of driving pure-jump lévy processes. it is noteworthy that the proposed estimator requires neither ergodicity nor moment conditions. an extensive simulation study is carried out to demonstrate good estimation accuracy. the case of large-time asymptotics under ergodicity is briefly mentioned, where an analogous asymptotic normality result can be deduced.
|
the last two decades have seen an explosion in the study of complex systems , caused by the increasing relevance for society of such large interconnected structures , and by an unprecedented availability of data to analyze them .many of these systems can be modelled as networks , in which the system elements are represented as nodes , and their interactions as connections , or edges , linking them .the network representation of complex systems has been used in the social sciences , in biology , and in studies of technological systems and communication systems .more recent work has focussed on the multilayer nature of complex networks , introducing a new framework that is particularly useful for the analysis of large complex data sets .researchers have applied complex systems techniques to a wide range of disciplines , identifying and analyzing several defining features of complex networks , such as the small world property , heterogeneous degree distributions , clustering , degree - degree correlations , assortativity , synchronizability , and community structure .communities were originally studied in the context of social networks , in which they are formed by groups of people that share close friendship relations .however , communities of densely connected modules have been observed in several real - world and model networks of diverse nature , where , in general , they are defined as groups of nodes whose internal connections are denser or stronger than those that link nodes belonging to different groups . in all these cases , the presence of communities directly influences the behaviour of the system , where there is often a correspondence between communities and functional units .ever since the discovery of community structure in real - world networks , a plethora of techniques devoted to their detection has been introduced . the challenge is both theoretical , in proposing a good mathematical definition of what constitutes a community , and computational , in developing good heuristics that can detect communities in a reasonable time .a common way of investigating the community structure of networks starts with the definition of a quality function , which assigns a score to any network partition .larger scores correspond to better partitions , and algorithms are created to find the partition with the largest score . by far ,the most common and used of such quality functions is modularity , that works by comparing the number of links inside each community to the number of links that would be expected if the nodes were connected at random , without any preference for links within or outside the community .a partition with a large modularity indicates that the communities have many internal links and few external ones , when compared to a randomized version of the network .however , despite its success , modularity also has some shortcomings , decreasing its general usefulness . 
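for concreteness, the following minimal python sketch evaluates the modularity of a given partition directly from the definition above and compares it with the implementation shipped with networkx; the example graph (two 5-cliques joined by a single edge) is purely illustrative.

```python
# Minimal sketch: Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j),
# checked against networkx's built-in modularity.
import numpy as np
import networkx as nx
from networkx.algorithms import community

def modularity(G, labels):
    nodes = list(G)
    A = nx.to_numpy_array(G, nodelist=nodes)
    k = A.sum(axis=1)
    two_m = k.sum()
    c = np.array([labels[v] for v in nodes])
    same = c[:, None] == c[None, :]
    return float(((A - np.outer(k, k) / two_m) * same).sum() / two_m)

G = nx.barbell_graph(5, 0)                      # two 5-cliques joined by one edge
labels = {v: (0 if v < 5 else 1) for v in G}
parts = [set(range(5)), set(range(5, 10))]
print(modularity(G, labels), community.modularity(G, parts))
```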
in this article, we study a new quality function , _ modularity density _, that was originally introduced in and that has been shown to address the limitations of traditional modularity .we present a detailed analysis of its properties on synthetic networks typically used to evaluate quality functions , as well as on random graphs , which are a commonly used benchmark to test community detection methods .we also present some limitations that need to be taken into consideration when using methods based on modularity density .in addition , we describe a new community detection algorithm based on this metric , whose computational complexity is quadratic in the number of nodes , and validate it on synthetic and real - world networks , showing that it performs better that other currently available methods .also , we argue that the nature of modularity density allows for a direct quantitative comparison of community structures across networks of different sizes .the modularity of a network with nodes and links is defined as : where is the adjacency matrix of the network , is the degree of node , is the community to which node is assigned and is the kronecker delta . the first term accounts for the presence or absence of a link between node and node ; the second term , instead , is the expected number of links between node and node in a random network with the same degree sequence as the original one .a first limitation of modularity is that it is intrinsically dependent on the number and distribution of edges , rather than on the number of nodes . to see this ,denote by and the number of internal and external links of community , respectively .moreover , let be the sum of the degrees of the nodes in community . with this notation ,it is \:,\ ] ] where denotes the set of all communities in the partition . in this expression, each term in the sum refers to a different community .the first factor of each term corresponds to the internal density of links in the community , whereas the second factor encodes the expected density of links in the random network null model .now , introduce the positive parameter , representing the ratio of external links to internal ones : the value of is smaller for strong communitites , and higher for weaker ones .then , we can write \:.\ ] ] from this expression , it is clear that a community gives a positive contribution to only if : this implies that the condition for a community to give a positive contribution only depends on the number of edges in the community and on the total number of edges in the network , but not explicitly on the number of nodes .a similar result can be obtained considering a network of communities disconnected from each other , along the lines of . 
under the assumption that all groups have the same number of links, we can write then , from , it is =1-\frac{1}{\kappa}\:.\ ] ] this shows that modularity converges to 1 with the number of communities regardless of the internal properties of the communities , such as their size , or the number of internal edges .as long as is very large and all communities have the same number of edges , a network of disconnected trees has the same modularity of a network of disconnected cliques .as before , we also see that the number of nodes in each group does not explicitly contribute to , and , as an immediate consequence , a network composed of few cliques has a smaller modularity than a network composed of many disjoint trees .in addition to these results , the effectiveness of modularity is not constant for all edge densities . to determine its dependence on this quantity , we follow and connect the groups in a ring configuration , where each community is linked with exactly one edge to the next one , and one edge to the previous one in the ring , for a total of inter - community edges . in this scenario, we have from , it follows that = 1-\frac{\kappa}{m}-\frac{1}{\kappa}\:.\ ] ] for constant , this expression reaches its maximum when , for which it is thus , the highest modularity corresponds to a partition in modules .once again , the number of nodes in the communities does not affect its largest possible value .this major limitation of modularity is known as the _ resolution limit _ , and it indicates that modularity , as a quality function for community detection , has an intrinsic scale proportional to .the number and size of the communities that can be detected via modularity maximisation are bound to adhere to this limit , posing a serious question on the significance of results obtained with this method .in fact , in a more general framework , fortunato and barthlemy have shown that , under some circumstances , the resolution limit can even force pairs of well - defined communities to be merged into a larger cluster , because this corresponds to a higher modularity .finally , it is worth noting that the trivial partition where all the nodes are put together in one single community , namely the whole network itself , has a modularity of 0 .this can be easily seen from , since in this case the sum has only one term , and , so at first , this might seem a desirable property for a quality function , since , intuitively , the trivial partition should not have a positive modularity .however , this implies that any partition that achieves a modularity larger than 0 is retained as a valid community structure . since community detection algorithms try to maximize modularity , it is often the case that such a positive value can be found even on erds - rnyi random graphs . 
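this behaviour is easy to reproduce: the short sketch below runs a generic modularity-maximizing heuristic (networkx's greedy agglomeration, unrelated to the algorithm developed later in this article) on an erdős-rényi graph and reports a clearly positive modularity despite the absence of any planted structure.

```python
# Even a purely random G(n, p) graph typically admits a partition with clearly
# positive modularity, so Q > 0 alone is not evidence of community structure.
import networkx as nx
from networkx.algorithms import community

G = nx.gnp_random_graph(200, 0.05, seed=42)
parts = community.greedy_modularity_communities(G)
print(len(parts), community.modularity(G, parts))
# typically several communities with modularity well above zero,
# although the graph is completely random
```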
to stress this point ,the trivial partition with can always be considered , but since one is interested in the maximum value of , it is often discarded in favour of a clustering that achieves any positive value of modularity .this poses a serious limitation to the ability of modularity - based algorithms to partition random graphs correctly .several variants of modularity have been proposed to address the resolution limit .for instance , multi - resolution methods , such as the one described in , introduce an additional tunable parameter in the expression for : \:.\ ] ] larger values of cause to be larger for partitions with smaller modules , whereas smaller values favour larger communities .however , this approach suffers from similar limitations to those presented by the original modularity . in particular , has two contrasting behaviours : small clusters tend to be merged together , while large communities tend to be split into subgroups .networks in which all the communities are of comparable size are immune to this problem , and one can find a value of for which they can all be resolved .however , the existence of an optimal is not guaranteed in the general case . in particular , for networks whose community sizesare heterogeneously distributed , e.g. , following a power law , it is not possible to find a value of that avoids both problems .the reason for this is that the nature of the resolution limit is more general than the specific definitions of modularity and its multi - resolution extension .several quality functions for community detection , including the one just mentioned , can be derived within the general framework of a first principle potts model with hamiltonian \delta_{c_ic_j}\:,\ ] ] where and are non - negative weights .different choices for the weights result in different quality functions . however , only those using non - local weights can be truly free from the resolution limit , while all others , including modularity , multi - resolution modularity and functions based on quantities such as betweenness , shortest paths , triangles and loops , can never avoid it .recently , a new quality function called _ modularity density _ has been proposed to overcome the issues outlined above . given a network partition , modularity densityis defined as ^ 2 -\sum_{\widetilde{c}c}\frac{m_{c\widetilde{c}}^2}{2mn_cn_{\widetilde{c}}}\right\rbrace\:,\ ] ] where is the number of nodes in community , the internal sum is over all communities different from , and is the number of edges between community and community .this new metric brings two major improvements over traditional modularity .first , it contains an explicit penalty for edges connecting nodes in different communities .this addresses the problem of the splitting of large communities , since each split introduces external links and is thus penalized .second , all terms , including the penalty for inter - community edges , are explicitly weighted by the community sizes .therefore , a partition with many edges linking two small communities is penalized more than one with the same number of edges linking two large ones .thus , modularity density introduces local dependencies that are not found in traditional of modularity .additionally , it is not related to the potts model hamiltonian , thus avoiding the resolution limit problem . note that requires , which implies that partitions with communities consisting of an isolated node are not allowed . 
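the following python sketch evaluates modularity density for a given partition of an undirected, unweighted graph. the split-penalty term follows the expression above; since the inline formula for the remaining factors is only partially recoverable here, the internal terms follow the published definition of modularity density (each term weighted by the community link density), so this should be read as an illustrative reconstruction rather than the authors' implementation.

```python
# Sketch of a modularity-density evaluation; communities are node sets with at
# least two nodes each, covering all nodes of the graph.
import networkx as nx

def modularity_density(G, communities):
    m = G.number_of_edges()
    q = 0.0
    for i, c in enumerate(communities):
        n_c = len(c)
        m_c = G.subgraph(c).number_of_edges()             # internal edges of c
        cuts = [(nx.cut_size(G, c, d), len(d))
                for j, d in enumerate(communities) if j != i]
        e_c = sum(x for x, _ in cuts)                      # external edges of c
        p_c = 2.0 * m_c / (n_c * (n_c - 1))                # internal link density
        q += (m_c / m) * p_c - ((2.0 * m_c + e_c) / (2.0 * m) * p_c) ** 2
        q -= sum(x ** 2 / (2.0 * m * n_c * nd) for x, nd in cuts)
    return q

# sanity check against the disconnected-communities formula discussed below,
# Q = p * (1 - p / kappa): four disjoint 5-cliques (p = 1) should give 0.75.
kappa = 4
G = nx.disjoint_union_all([nx.complete_graph(5) for _ in range(kappa)])
parts = [set(range(5 * k, 5 * (k + 1))) for k in range(kappa)]
print(modularity_density(G, parts))
```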
to investigate the properties of modularity density in more depth ,rewrite the expression for as \:,\ ] ] where the parameter can assume values between 0 and 1 , since it is the fraction of possible internal links actually present in community .thus , it measures the connection density of the community , or , equivalently , the probability that two random nodes inside are connected . fromit is clear that having many internal edges is not enough for a community to give a large contribution to modularity density .in fact , a strong community is one where the density of edges , rather than their number , is large .this also agrees with the intuitive notion that a community is a group of nodes that are densely connected amongst each other .thus , a good partition is one that is characterized at the same time by a large number of intra - community links and a high density of edges within the communities .modularity density achieves this by accounting for the number of nodes in each group and , in this sense , it has a more natural dependence on the local properties of the network and of the partition under consideration than does traditional modularity .next , it is instructive to study the behaviour of modularity density in the same cases described in the previous section .first , consider a network partitioned in just two communities , and .the contribution to of community is : introducing the proportionality constant as before , it is \:,\ ] ] where we used and . unlike what happens with traditional modularity , the contribution of a single community depends explicitly on the number of internal links _ and _ on the size of the community itself .disconnected modules , the modularity density depends only on and the edge density of the communities .fixing one of the two parameters , always increases with the other.,scaledwidth=75.0% ] consider now again a network composed of disjoint communities . assuming that each community has the same number of nodes and the same number of edges , the modularity density of such a network is : =p_\kappa^\star\left(1-\frac{p_\kappa^\star}{\kappa}\right)\:,\ ] ] where is the connection density of the communities .the first major difference between and is that depends not only on the number of communities , but also on their density of edges , unlike traditional modularity , which only depends on .also , for a fixed value of , increases with ( see ) .this is remarkable , since it indicates that the strength of the partition increases as more links are added within each group , in striking opposition with the behaviour of traditional modularity .we also note that for a fixed value of , modularity density increases with the number of communities .its theoretical maximum is reached in the limiting case of an infinite number of communities , with the special requirement that they are all cliques . moreover , in one more substantial difference with traditional modularity , a network composed of few cliques in general has a higher modularity density than a network composed of an infinite number of sparse communities . finally , we study the test case of the ring of communities each linked by a single edge to the next community and a single edge to the previous one . as before, it is and , in addition , and . 
introducing the variables and can write the modularity density as \:.\ ] ] the optimal number of communities is the one that maximizes this expression , or , equivalently , the one for which its derivative vanishes .differentiating with respect to , we obtain with this expression does not have a simple general root in terms of .rather , the solutions depend on the local and global properties of the network .thus , the number of groups does not seem to be constrained by an intrinsic scale of order . as briefly discussed above , a major drawback of traditional modularityis that algorithms based on its maximization often find supposedly viable partitions on graphs with no ground - truth community structure . in such cases ,the correct partition is either the one where all nodes are placed together , or the one with communities , each consisting of a single node . in either case ,modularity vanishes .thus , modularity - maximizing algorithms often suggest spurious community structures simply beause they have a non - zero modularity .conversely , from it follows that the one - group partition has a modularity density where is the network density .note that this expression is a parabola , whose roots are and , which are the fully disconnected and fully connected graphs , respectively .thus , a partition s needs not only to be positive , but also to lie above the parabola for an algorithm based on modularity density maximization to accept it .we will see that this makes such algorithms not find communities on random graphs , as should be the case for a reliable community detection method .having discussed the advantages of modularity density as a quality function , we propose a community detection algorithm based on its maximization . currently , the only published modularity density algorithm is based on iterations of two steps , namely splitting and merging .the algorithm is divisive , starting from a partition where all the nodes are placed in a single community and then using bisections .each splitting is performed using the fiedler vector of the network , which is the eigenvector of the graph laplacian corresponding to the second smallest eigenvalue . the graph laplacian is defined as , where is the diagonal matrix of the node degrees .the merging steps try to merge pairs of communities together if doing so improves the current partition .the two steps are repeated until the partition can not be improved any longer , and the algorithm is deterministic , meaning that the same initial network always yields the same partition . here ,we extend and adapt an existing modularity maximisation algorithm , originally proposed in , which achieves the largest published scores of traditional modularity . along the lines of the original method ,our algorithm consists of four main steps , which we describe below .[ imple ] contains a fully detailed discussion of the algorithm implementation and its computational complexity . in this step, we try to bisect the community under consideration ( see a ) . to do so, we use the leading eigenvector of the modularity matrix . despite suffering from the limitations discussed above, modularity still provides a good initial guess for a partition that is then refined by the subsequent steps . 
after every bisection, the partition can be often improved by using a variant of the kernighan - lin algorithm .we consider moving every node from the community into which it was assigned to the other ( see b ) .every such move would result in a change of the quality function , and we perform the move yielding the largest of such changes .note that we introduce here a non - deterministic factor : given a tolerance parameter , we consider all moves achieving a change of modularity density within the interval ] , where is the largest increase in modularity density achieved by any move .we build a decision tree by progressively merging pairs of communities , until there is only a single community left .we then look at the nodes in the tree corresponding to the largest increase in modularity density but , in difference from the previous steps , if more than one node results in the same increase , we select the one with the smallest number of communities .the whole step is repeated until the current partition can not be improved further .disconnected communities .the predictions of ( solid lines ) are confirmed by numerical simulations throughout the range of and for different values of . for each , we consider groups with 50 , 100 and 500 nodes , respectively . additionally , and as expected , we observe that the value of modularity density does not depend on the number of nodes in each community , but only on the number of communities and their internal density .each point is the average over 100 network realizations.,scaledwidth=75.0% ] with these four steps , the algorithm can be summarized as : * start with a single community containing all nodes .* try to bisect the network using the leading eigenvector of the modularity matrix . *if the bisection was successful , then perform a fine tuning step .* iterate the bisection and fine tuning steps on each of the communities in the current partition , until no further splitting and refinement can be performed . *perform the final tuning step . *perform the agglomeration step .* repeat the sequence of steps until it is no longer possible to find an increase in modularity density .as described in detail in [ imple ] , the worst - case computational complexity of the full algorithm is .to validate our algorithm , we test it on several synthetic and real - world networks .first , we verify that it reproduces the theoretical predictions on networks of disconnected communities and on rings of modules , discussed in and .then , we analyze its behaviour on random networks belonging to different ensembles . finally , we run it on a set of benchmark networks , comparing the results with the best ones currently published . and compare the theoretical values of modularity density with the results of our algorithm . in panel ( a ) we consider networks of fully connected cliques , finding a perfect agreement between theoretical value ( solid line ) and simulations ( squares ) . in panel( b ) , we build networks with different fixed values of and vary their internal density . note that , differently from ( a ) , here the groups are not fully connected .the theoretical values ( lines ) and simulation results match precisely . in both panels ,each point is the average over 100 realizations of the same network.,title="fig:",scaledwidth=45.0% ] and compare the theoretical values of modularity density with the results of our algorithm . 
in panel ( a ) we consider networks of fully connected cliques , finding a perfect agreement between theoretical value ( solid line ) and simulations ( squares ) . in panel ( b ) , we build networks with different fixed values of and vary their internal density .note that , differently from ( a ) , here the groups are not fully connected .the theoretical values ( lines ) and simulation results match precisely . in both panels ,each point is the average over 100 realizations of the same network.,title="fig:",scaledwidth=45.0% ] first , we consider networks formed by disconnected communities. indicates that the modularity density of such networks depends only on the connection probability and on itself , but not on the size of each community .we find an exact agreement between the simulation results and the theoretical prediction for all the values of ( ) .we also note that the values of modularity density found in the simulations do not depend on the number of nodes in the communities . as a second test ,we simulate two types of ring networks of communities .we start by making the communities cliques of 5 fully connected nodes , and vary from 3 to 20 . from , the expected modularity density of these networks is \:.\ ] ] the comparison between the modularity density predicted by this expression and the values obtained in our simulations is shown in ( a ) .we find a precise agreement between the two , showing that our algorithm correctly identifies the cliques without splitting them , and finds the right value of modularity density .next , we build ring networks in which we fix and vary the community density .each community contains 50 nodes , and we vary from to 1 , performin the test for , and .the results , in ( b ) , show a perfect agreement in all cases , again indicating that our algorithm correctly partitions the networks . as we argued in the previous sections ,a desirable feature of a community detection algorithm is that it does not propose a complex partition of graphs without ground - truth community structure . to verify that our algorithm satisfies this requirement , we test it on erds - rnyi random graphs . for graphs in this ensemble ,every possible edge between nodes exists independently with probability .thus , the average number of edges is .these networks do not have any true community structure , since all their edges are fully random , and thus they are one of the benchmarks against which community detection algorithms are often tested . for our simulations , we create networks with values of from to and number of nodes 50 , 100 , 500 and 1000 .the results , in , show that for all network sizes , the average modularity density matches almost perfectly the theoretical prediction . even for small networks , where finite - size effects are largest ,the values lie in close proximity to the theoretical parabola and we can only observe a small deviation for the smallest networks at low values of . 
also note that all the results collapse on the theoretical curve , which does not depend on network size .these results represent a major improvement over modularity - based algorithms , that typically detect communities even on erds - rnyi networks .in addition , erds - rnyi networks are locally tree - like for low enough values of , and highly clustered for close to 1 .thus , the results also indicate that modularity density is highly effective in detecting when no real communities exist in locally tree - like graphs , and does not introduce spurious modules even when the clustering increases .in fact , also the limiting case of fully connected graphs , which corresponds to a link probability identically equal to 1 , is properly identified by our method . as a special case of random networks, we also study random regular graphs .random regular graphs are random networks where all nodes have the same degree , but the edges are still randomly placed . using the algorithm described in ref . , we create random regular graphs with 100 , 500 and 1000 nodes . for each of the three network sizes , we consider degrees ranging from 4 to 20 , 100 and 500 , respectively . for every pair of size and degree , we generate 100 network realizations , on which we run our community detection algorithm .the results , depicted in , show a good agreement between theoretical predictions and simulations .the only exceptions are three cases that correspond to the sparsest graph of each given size .these results show one of the major strengths of modularity density .however , it is well known that most real - world networks are not well represented by erds - rnyi graphs or random regular graphs .rather , they are characterized by heterogeneous degree distributions .thus , to further verify the performance of our algorithm , we test it on lfr networks .these constitute a set of widely - used benchmark networks , whose distributions of degrees and community sizes follow a power - law . for our tests, we fix the network size to and vary the other parameters , namely the exponent of the degree distribution , the mean degree and the largest degree . also , we ensure that the networks thus created contain a single community , so that no actual community structure is present. we run our algorithm on the networks thus generated and compare its results with the theoretical expectations .the results , presented in , show that for and , the modularity density found by the algorithm closely follows the predicted value for networks of all densities .we do observe , however , some deviations from the predicted values at .this is probably due to the fact that , asymptotically , no networks exist with a pure power - law degree distribution for . thus, in the limit of , and particularly for low densities , a spurious structure of stars with bridges appears , effectively introducing communities in the networks .nodes and varying parameters . 
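for readers wishing to reproduce tests of this kind, lfr benchmark graphs can be generated directly with networkx; the sketch below uses the generator's documented example parameters rather than the values used in the text, and it does not reproduce the variant that forces a single community.

```python
# Sketch: generating an LFR benchmark graph with networkx (documented example
# parameters, illustrative only) and reading back the planted communities.
import networkx as nx

n, tau1, tau2, mu = 250, 3, 1.5, 0.1
G = nx.LFR_benchmark_graph(n, tau1, tau2, mu, average_degree=5,
                           min_community=20, seed=10)
# the generator stores each node's planted community as a node attribute
planted = {frozenset(G.nodes[v]["community"]) for v in G}
print(G.number_of_nodes(), G.number_of_edges(), len(planted))
```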
in particular, we let the mean degree assume the values 15 , 25 , 35 , 44 and 55 , and the largest degree be 150 , 200 and 250 .for each combination of the parameters , we generate 100 networks and for each we record the edge density and the largest modularity density our algorithm finds .the plot shows considerable agreement between the theoretical modularity density ( solid line ) and the one found by the algorithm .the only deviations appear for and low , and they are probably due to the breakdown of the lfr model for this limiting value of the degree distribution exponent.,scaledwidth=75.0% ] we now verify the performance of our algorithm on some well known networks , for which results of the maximum modularity density obtained so far are available .the first is zachary s karate club network .this is a friendship network between 34 members of a karate club in a u.s .university during the 1970s and it has become one of the most standard benchmarks to test community detection algorithms .the interest in this network lies in the fact that , not long after it was recorded , the club split into two subgroups due to internal problems between two members , namely the manager and the coach .thus , a traditional challenge is to be able to detect these two groups based only on the friendship data available in the network topology , under the assumption that the members would decide to follow whichever leader they were more strongly related to between the coach and the manager .of the 561 possible edges in the network , only 78 of them are present , making the network fairly sparse , with an effective connection probability . a second benchmark network we consider is the american college football club network .here , the nodes represent different college football clubs and an edge connects two teams if there has been a regular - season game between them during the 2000 season .this network is known to have a natural community structure because the teams are divided into different leagues , thus making matches between teams more or less likely depending on the group they belong to . benchmark & & & & + karate club & & & & + football club & & & & + lfr , & & & & + lfr , & & & & + lfr , & & & & + lfr , & & & & + lfr , & & & & + lfr , & & & & + lfr , & & & & + lfr , & & & & + lfr , & & & & + lfr , & & & & + finally , we consider again some lfr benchmark networks , choosing a set of parameters for which already published results exist .presents a comparison between the results obtained using our algorithm and the best results available in the literature .note that currently there is only one other algorithm based on modularity density .because of the stochasticity within our method , for each value of the mixing parameter , we create 10 realizations of the network and run the algorithm 100 times on each , reporting the average maximum modularity density found . in all cases considered ,our algorithm finds a partition with higher modularity density than the best one currently published .so far , we have shown that our algorithm identifies the correct value of modularity density on a range of test networks .however , it is also worth noting that methods based on modularity density present some limitations in special cases . 
to show this, we analyze regular ring lattice networks .these are graphs composed by a ring of nodes , each connected to a number of neighbours in each direction .we consider networks of 100 , 500 and 1000 nodes , for which the number of neighbours of each node varies from 8 to 40 , 200 and 500 , respectively .the results , depicted in , illustrate how the theoretical prediction and the simulated results differ on almost all cases considered .in fact , with one exception , there is always a set of communities with a higher value of modularity density than the one corresponding to the trivial partition .trees are another special case where modularity density exhibits some shortcomings . to see this , consider a tree with nodes . if all the nodes are put in a single community , the modularity density is given by : if instead one divides the nodes between two different communities of equal size and equal number of internal links , it is where we used the fact that , for a tree , there can only be one link between the two communities if they do not consist of disconnected components .it follows that , for , , that is , for trees with more than 6 nodes , a partition in two equally sized communities always has a larger modularity density than the trivial partition .more generally , partitions with a larger number of equally sized communities tend to have a larger score , as can be seen in .communities are a fundamental structure that is often present in real - world complex networks .thus , the ability to accurately and efficiently detect them is of great relevance to the analysis of complex data sets . despite their success , traditional methods based on modularityhave been shown to suffer from limitations .we have presented a detailed analysis of the properties of modularity density , an alternative quality function for community detection , showing that it does not suffer from the drawbacks that affect traditional modularity . in particular, modularity density does not depend separately on the size of the network or the number of edges , but only on the combination of these two properties in terms of the density of links within the communities . as a consequence, it allows a direct quantitative comparison of the community structure across networks of different sizes and number of edges . at the light of these considerations ,we have introduced a new community detection algorithm based on modularity density maximization . investigating its performance on erds - rnyi and heterogeneous random networks, we showed that it correctly identifies them as containing no actual communities .moreover , our algorithm outperforms the other existing modularity - density - based method on every benchmark network that we tested .the high level of accuracy it reaches , its low computational complexity , and the ability to properly identify networks with no ground - truth communities make it a powerful tool to investigate complex systems and extract meaningful information from the network representation of large data sets , giving it a broad range of application throughout the physical sciences . 
at the same time, we have also identified some limitations of modularity density that were not previously known .more specifically , we found that the theoretical maximum of modularity density for ring lattices and pure random trees does not correspond to the trivial partition , but rather to partitions with more than one community .we find this particularly intriguing , since erds - rnyi graphs are locally tree - like .thus , these results seem to suggest a certain relevance of long - distance links for a correct behaviour of modularity density .since most real - world networks are not pure trees or ring lattices , and indeed do feature shortcut links , we believe these limitations do not affect the suitability of modularity density and methods based on it in the analysis and modelling of complex systems .we will further investigate these limitations in future work .additionally , we will also extend this method to other types of networks , such as bipartite graphs , which require a redefinition of the concept of community itself .an implementation of our algorithm is freely available for download at www.fedebotta.com .fb acknowledges the support of uk epsrc ep / e501311/1 .cidg acknowledges support by eins , network of excellence in internet science , via the european commission s fp7 under communications networks , content and technologies , grant no .here , we provide a detailed description of the implementation of the algorithm presented above . to describe how the different steps are carried out ,first we introduce some notation .let be the size of the current partition .then , let be the partition adjacency matrix of the network , i.e. , the matrix whose elements are the number of links between community and community .also , let be the community spectra matrix , i.e. , the matrix whose elements are the number of links between node and nodes in community .finally , let be the -dimensional community size vector , whose elements are the sizes of the communities .note that our implementation uses three tolerance parameters : 1 .power method tolerance .this parameter determines the tolerance for the floating - point comparisons in the power method .2 . bisection tolerance .since a bisection with the leading eigenvector of the classical modularity matrix does not guarantee an increase in modularity density , we introduce a tolerance . after each bisection, we check the difference between the new and old values of modularity density .a bisection is accepted if modularity density increases or if it decreases by an amount smaller than ( more details are given in [ sec : bisection ] ) .3 . acceptance tolerance . this parameter defines the size of the tolerance range when finding the moves that maximally increase modularity density during tuning and agglomeration steps .the first step in the algorithm attempts to bisect a community ( see also a ) , which can be either the whole network or a previously determined community , using the traditional modularity matrix . 
to do so, we use the spectral method , which we briefly review here .the modularity matrix is defined as and the expression for the modularity of a given partition is since we are only considering a potential bisection , can only assume two values .thus , a partition can be represented by a vector whose entries are and if node is assigned to the first or the second community resulting from the split , respectively .then , substituting the expression in , it is the vector can be expressed in terms of the normalized eigenvectors of as where the are linear combination coefficients , and is the eigenvector of the modularity matrix , corresponding to the eigenvalue . substituting in , we obtain if we label the eigenvalues so that , this expression is maximized when is parallel to the leading eigenvector .however , is a vector whose entries can only be 1 .thus , we can only choose its elements to make it as parallel to as possible .one way of achieving this is to set if and if .then , the bisection consists in finding the leading eigenvector of and , if the corresponding eigenvalue is positive , dividing the nodes according to this rule .several metohds can be used to diagonalize .since we only need to find a single eigenvector , and this step only provides a starting guess , we choose to use the power method , which offers a good tradeoff between speed and accuracy .further consideration must be given to the fact that we are performing a bisection based on the modularity matrix , whereas our aim is to maximize modularity density .the potential problem is that a bisection based on modularity might not result in a larger value of modularity density . to avoid this, we introduce a tolerance parameter , whose role is to determine the largest possible decrease in modularity density that we want to accept when bisecting . in other words , if after the bisection the modularity density of the new partition has decreased by a value larger than , we do not accept the split , and keep the original partition .we consider only one exception to this rule , namely the first iteration of the bisection . at the start of the algorithm, all nodes are placed together and we try to bisect the whole network . at this point , we accept any bisection in order to allow at least a whole iteration of the whole algorithm . indeed ,if we did nt accept that , both the tuning and agglomeration steps could not be executed , thus leaving the network not partitioned .note that not partitioning the network could be the correct answer , but we want to make sure that we have considered other partitions as well at least once .if not partitioning the network is the best answer , this will be found by the agglomeration step , that will merge all the communities together . moving from community to community .,scaledwidth=75.0% ]finally , we note that the previous expression for is correct only when considering the whole network . when trying to partition a single community which does not contain all the nodes , we need to construct an sub - modularity matrix whose elements are where is the degree of node within the community . using this matrix, we then perform the bisection step as described above . in algorithm [ alg : bisection ], we present a detailed description of the implementation of this step . for each community, the computation of the leading eigenvalue through the power method requires steps .thus , the worst - case complexity of the the bisection step is . 
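a compact python sketch of this bisection step is given below: it builds the modularity matrix (or its restriction to a node subset, using the standard generalized form, since the inline formula is not fully recoverable here), extracts the leading eigenvector with a power iteration made convergent by a diagonal shift, and splits nodes by sign. the acceptance test against modularity density and the tolerance parameters described above are omitted, so this is an illustration rather than the actual implementation.

```python
# Sketch of spectral bisection: leading eigenvector of the (generalized)
# modularity matrix via power iteration, nodes split by the sign of entries.
import numpy as np
import networkx as nx

def spectral_bisection(G, subset=None, tol=1e-10, max_iter=100_000, seed=0):
    nodes = list(G)
    pos = {v: i for i, v in enumerate(nodes)}
    A = nx.to_numpy_array(G, nodelist=nodes)
    k = A.sum(axis=1)
    B = A - np.outer(k, k) / k.sum()                 # modularity matrix
    subset = nodes if subset is None else list(subset)
    idx = [pos[v] for v in subset]
    Bg = B[np.ix_(idx, idx)]
    Bg = Bg - np.diag(Bg.sum(axis=1))                # generalized submatrix
    shift = np.abs(Bg).sum(axis=1).max()             # bound on |eigenvalues|
    M = Bg + shift * np.eye(len(idx))                # makes power method converge
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(len(idx))
    v /= np.linalg.norm(v)
    for _ in range(max_iter):
        w = M @ v
        w /= np.linalg.norm(w)
        if np.linalg.norm(w - v) < tol:
            v = w
            break
        v = w
    if v @ (Bg @ v) <= 0:                            # no positive eigenvalue:
        return None                                  # do not bisect
    return ([u for u, vi in zip(subset, v) if vi >= 0],
            [u for u, vi in zip(subset, v) if vi < 0])

print(spectral_bisection(nx.barbell_graph(5, 0)))    # recovers the two cliques
```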
flag first bisection 1 1 current number of nodes ] construct leading , leading power method(b ) flag bisection bisection( , current nodes labels , current number of nodes ) cancel bisection flag\gets1 ] flag fine tuning 1 the crucial part of both the fine tuning and final tuning steps is that they try to move individual nodes to different communities ( see also b and c ) .thus , we need to consider what happens to the current partition and how , and change when we move a node from community to community . provides an intuitive scheme to illustrate the changes that follow from such a move . in general , both the number of internal and external links of will change , since node is leaving this community . however , to correctly update the modularity density , we also need to keep track of the changes in all the specific numbers of links between and every other community in the current partition .similarly , we need to ensure that the internal and external links of are updated correctly . finally , the sizes of the two communities changes as well as a consequence of the move .below , we describe how to efficiently perform these updates .the partition adjacency matrix keeps track of the number of edges between each pair of communities , as well as the internal number of edges of each community in its diagonal elements .looking at , one can see that the following quantities change : * the number of internal links of the community that node is leaving decreases by the internal degree of node , which is the number of links it has to other nodes in .* the number of internal links of the community that node is moving to increases by the number of links node has with other nodes in . * the number of links between the old and the new community of node increases by the number of links between and its old community , and decreases by the number of links between and its new community . * the number of links between the old community and all the other communities decreases by the number of links between and nodes in . * the number of links between the new community and all the other communities increases by the number of links between and nodes in . in formulae : where we dropped the repeated index for the diagonal elements of to keep the notation consistent .the rows of the matrix are the community spectra of the nodes , containing the numbers of links that each node forms with nodes in all the individual communities in the current partition . when a node changes community , its community spectrum does not change .however , every neighbour of will experience a change in the number of connections it has to nodes in the old and new communities of .in particular , in moving node from to , the following changes happen : * since is no longer in community , all the nodes connected to have one link less to . * since is now in community , all the nodes connected to have one connection more to . in formulae: updates to this vector are straightforward : flag increase 1 flag increase 0 \gets ] 1 fine tuning tree\gets ] fine tuning tree find all steps within of step in fine tuning tree pick randomly step with perform all updates in fine tuning tree until the chosen step flag increase since is defined as a sum over all current communities , we consider the terms in its expression separately , and show how they change when node moves from community to community .we first look at what happens to the contributions of a community different from and . 
in this case, the only changes happen for two terms in the internal sum : then , we consider the contribution of community : finally , we consider the contribution of community : flag increase 1 flag increase 0 [\bar{c}]\gets ] 1 final tuning tree\gets ] final tuning tree find all steps within of step in final tuning tree pick randomly step with perform all updates in final tuning tree until the chosen step flag increase in algorithm [ alg : fine ] and algorithm [ alg : final ] , we present a detailed description of the implementation of the tuning steps . the complexity of computing the potential change in modularity density is , since we have to consider all the communities to update the split penalty term. for the fine tuning , this process is repeated times per node , yielding a complexity of . in the final tuning , instead , all communities are considered as potential targets , introducing an extra factor of in the complexity , which becomes .note that these are worst case scenarios , since we typically do not have to consider all communities for the updates , because each node is only connected to a subset of them .the agglomeration step attempts the merger of pairs of communities ( see also d ) .if a merger is carried out , a community is obtained whose size is the sum of the sizes of the original ones .a delicate point is deciding the label of the new community . in our implementation, we always keep the smaller of the two labels .so , for instance ,if we merge community 1 with community 4 , the resulting community will be labelled 1 and community 4 will disappear .we then need to reassign the links of every node in the network to the new community , and also zero any link to the old community that disappeared .below , we describe how to efficiently perform the required updates , assuming a merger between community and community in which the label of the resulting community is .the following changes happen to the partition adjacency matrix : * the number of internal links of the merged community is the sum of the internal links of the two original ones plus the number of links between the two . * all the links of community vanish , since it has been merged with community .* the number of links between the new community and any other community is the sum of the number of links between each of the two original communities and . in formulae :flag increase 1 flag increase 0 [\hat{c}]\gets ] 1 agglomeration tree\gets ] agglomeration tree step in agglomeration tree picks step with and smallest number of communities perform all updates in agglomeration tree until the chosen step flag increase the number of connections between every node and the merged community is the sum of the number of links between and each of the two original communities , and no node is connected to community since it does nt exist any more : the changes to the community size vector are once again straightforward : as before , we consider the terms in the definition of modularity density separately , showing how they change for the merger considered . for the contribution of communities other than and , the only changes happen in two terms in the internal sum : flag repetition flag repetition flag repetition 1 flag repetition 0 then , we consider the contribution of community : finally , the contribution of community entirely vanishes. 
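to make the bookkeeping described above concrete, the sketch below maintains the partition adjacency matrix, the community spectrum matrix and the community size vector, and applies the updates for a node move (used by the tuning steps) and for a merger of two communities (used by the agglomeration step). it is a minimal python sketch under our own conventions (dense numpy arrays, an unweighted undirected graph, communities indexed by integer labels); the search trees, the modularity density increments and the acceptance logic of the actual steps are deliberately left out.

```python
import numpy as np

def move_node(i, c_old, c_new, A, P, S, sizes):
    """Apply the bookkeeping updates when node i moves from c_old to c_new.

    A     : 0/1 adjacency matrix of the undirected network
    P     : partition adjacency matrix; P[a, a] holds the internal links of a,
            P[a, b] the links between communities a and b
    S     : community spectrum; S[j, a] is the number of links of node j to a
    sizes : number of nodes in each community
    """
    l_old = S[i, c_old]              # links of i to its old community
    l_new = S[i, c_new]              # links of i to its new community
    P[c_old, c_old] -= l_old         # old community loses i's internal links
    P[c_new, c_new] += l_new         # new community gains internal links
    P[c_old, c_new] += l_old - l_new
    P[c_new, c_old] = P[c_old, c_new]
    for c in range(P.shape[0]):      # links of c_old / c_new to every other community
        if c in (c_old, c_new):
            continue
        P[c_old, c] -= S[i, c]
        P[c, c_old] = P[c_old, c]
        P[c_new, c] += S[i, c]
        P[c, c_new] = P[c_new, c]
    neighbours = np.flatnonzero(A[i])
    S[neighbours, c_old] -= 1        # every neighbour has one link less to c_old
    S[neighbours, c_new] += 1        # ... and one more to c_new
    sizes[c_old] -= 1
    sizes[c_new] += 1

def merge_communities(c_keep, c_gone, P, S, sizes):
    """Merge community c_gone into c_keep (the smaller label is kept)."""
    P[c_keep, c_keep] += P[c_gone, c_gone] + P[c_keep, c_gone]
    for c in range(P.shape[0]):
        if c in (c_keep, c_gone):
            continue
        P[c_keep, c] += P[c_gone, c]
        P[c, c_keep] = P[c_keep, c]
    P[c_gone, :] = 0                 # community c_gone disappears
    P[:, c_gone] = 0
    S[:, c_keep] += S[:, c_gone]
    S[:, c_gone] = 0
    sizes[c_keep] += sizes[c_gone]
    sizes[c_gone] = 0
```

keeping these arrays updated incrementally is what allows the change in modularity density caused by an elementary move or merger to be computed without visiting the whole network.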
in algorithm [ alg : agglom ] , we present a detailed description of the implementation of the agglomeration step .the computational complexity is .analogously to the tuning steps , this is the worst case scenario . in a typical situation ,a community is only connected to a few others , and thus one does not need to update all the terms in the partition adjacency matrix . finally , in algorithm [ alg : commdet ] we provide a detailed description of how the steps presented above are linked together in our community detection algorithm .the overall complexity of the algorithm is dominated by the final tuning step , which is the most computationally expensive , with a complexity . along the lines of , we consider a constant , and thus the worst - case complexity reduces to .to minimize running times , we take advantage of the independence of the incremental computing steps .both the fine tuning and final tuning try to move nodes from one community to a different one .the calculations of the potential change in modularity density are independent of each other and thus can be performed in parallel , rather than serially .this task is fairly straightforward , and our implementation exploits the widely used c library _open mp _ to allow an efficient parallelization using multiple threads on each computing node during the tuning and agglomeration steps .99 albert r and barabsi a - l , _ statistical mechanics of complex networks _ , 2002 _ rev .phys . _ * 74 * 47 newman m e j , _ structure and function of complex networks _ , 2003 _ siam rev ._ * 45 * 167 boccaletti s , latora v , moreno y , chavez m and hwang d - u , _ complex networks : structure and dynamics _ , 2006 _ phys . rep . _ * 424 * 175 wasserman s and faust k , _ social network analysis methods and applications _ 1994 ( cambridge : cambridge university press ) scott j _ social network analysis : a handbook _ , 2000 ( london : sage )newman m e j _ the structure of scientific collaboration networks _ , 2001 _ proc. natl .usa _ * 98 * 404 lazer d _ etal . _ , computational social science , 2009 _ science _ * 323 * 721 vespignani a , _ predicting the behaviour of techno - social systems _ , 2009 _ science _ * 325 * 425 williams r j and martinez n d , _ simple rules yield complex food webs _ , 2000 _ nature _ * 404 * 180 jeong h , tombor b , albert r , oltvai z n and barabsi a - l , _ the large - scale organization of metabolic networks _ , 2000 _ nature _ * 407 * 651 trevio s iii , sun y , cooper t f and bassler k e , _ robust detection of hierarchical communities from escherichia coli gene expression data _ , 2012 _ plos comp_ * 8 * e1002391 johnson s , domnguez - garca v , donetti l and muoz m a , _ trophic coherence determines food - web stability _ , 2014 _ proc .usa _ * 111 * 17923 albert r , jeong h and barabsi a - l , _ internet : diameter of the world - wide web_ , 1999 _ nature _ * 401 * 130 saramki j and moro e , _ from seconds to months : an overview of multi - scale dynamics of mobile telephone calls _ , 2015 _ eur . phys .j. b _ * 88 * 164 boccaletti s , bianconi g , criado r , del genio c i , gmez - gardees j , romance m , sendia - nadal i , wang z and zanin m , _ the structure and dynamics of multilayer networks _, 2014 _ phys ._ * 544 * 1 kivel m , arenas a , barthlemy m , gleeson j p , moreno y and porter m , _ multilayer networks _ , 2014 _ j. compl .netw . _ * 2 * 203 milgram s , _ the small - world problem _ , 1967 _ psychol . 
today_ * 1 * 60 de sola pool i and kochen m , _ contacts and influence _ , 1978 _ social networks _ * 1 * 5 watts d and strogatz s h , _ collective dynamics of small - world networks _ , 1998 _ nature _ * 393* 440 amaral l a n , scala a , barthlemy m and stanley h e , _ classes of small - world networks _ , 2000 _ proc .usa _ * 97 * 1114911152 barabsia - l and albert r , _ emergence of scaling in random networks _ , 1999_ science _ * 286 * 509 del genio c i , gross t and bassler k e , _ all scale - free networks are sparse _ , 2011 _ phys .lett . _ * 107 * 178701 newman m e j , strogatz s h and watts d j , _ random graphs with arbitrary degree distributions and their applications _, 2001 _ phys . rev .e _ * 64 * 026118 del genio c i and house t , _ endemic infections are always possible on regular networks _ , 2013 _ phys. rev .e _ * 88 * 040801(r ) johnson s , torres j j , marro j and muoz m a , _ entropic origin of disassortativity in complex networks _ , 2010 _ phys .lett . _ * 104 * 108702 williams o and del genio c i , _ degree correlations in directed scale - free networks _ , 2014 _ plos one _ * 9 * e110121 newman m e j , _ assortative mixing in networks _ , 2002 _ phys. rev .lett . _ * 89 * 208701 del genio c i , romance m , criado r and boccaletti s , _ synchronization in dynamical networks with unconstrained structure switching _ , 2015 _ phys . rev .e _ * 92 * 062819 girvan m and newman m e j , _ community structure in social and biological networks _ , 2002 _ proc .usa _ * 99 * 7821 pimm s l , _ structure of food webs _ , 1979 _ theor .* 16 * 144 garnett g p , hughes j p , anderson r m , stoner b p , aral s o , whittington w l , handsfield h h and holmes k k , _ sexual mixing patterns of patients attending sexually transmitted diseases clinics _, 1996 _ sex ._ * 23 * 248 flake g w , lawrence s , giles c l and coetzee f m , _ self - organization and identification of web communities _ , 2002 _ computer _ * 32 * 66 eriksen k a , simonsen i , maslov s and sneppen k , _ modularity and extreme edges of the internet _ , 2003 _ phys ._ * 90 * 148701 krause a e , frank k a , mason d m , ulanowicz r e and taylor w w , _ compartments revealed in food - web structure _ , 2003 _ nature _ * 426 * 282 lusseau d and newman m e j , _ identifying the role that animals play in their social networks _ , 2004 _b bio . _ * 271 * s477 guimer r and amaral l a n , _ functional cartography of complex metabolic networks _ , 2005 _ nature _ * 433 * 895 palla g , dernyi i , farkas i. and vicsek t , _ uncovering the overlapping community structure of complex networks in nature and society _ , 2005 _ nature _ * 435 * 814 arenas a , daz - guilera a and prez - vicente c j , _ synchronization reveals topological scales in complex networks _, 2006 _ phys . rev .lett . _ * 96 * 114102 restrepo j g , ott e and hunt b r , _ characterizing the dynamical importance of network nodes and links _ , 2006 _ phys .lett . _ * 97 * 094102 huss m and holme p , _ currency and commodity metabolites : their identification and relation to the modularity of metabolic networks _ , 2007 _ iet syst .* 1 * 280 blondel v d , guillaume j l , lambiotte r. and lefebvre e , _ fast unfolding of communities in large networks _ , 2008 _ j. stat. mech . - theory e. _p10008 del genio c i and gross t , _ emergent bipartiteness in a society of knights and knaves _ , 2011 _ new j. phys ._ * 12 * 103038 danon l , daz - guilera a , duch j and arenas a , _ comparing community structure identification _ , 2005 _ j. stat .mech - theory e. 
_p09008 lancichinetti a , fortunato s and radicchi f , _ benchmark graphs for testing community detection algorithms _, 2008 _ phys .e _ * 78 * 046110 fortunato s , _ community structure in graphs _ , 2010 _ phys ._ * 486 * 75 mucha p j , richardson t , macon k , porter m a and onnela j p , _ community structure in time - dependent , multiscale , and multiplex networks _ , 2010 _ science _ * 328 * 876 steinhaeuser k and chawla n v , _ identifying and evaluating community structure in complex networks _, 2010 _ pattern recognit .lett . _ * 31 * 413 decelle a , krzakala f , moore c and zdeborov l , _ inference and phase transitions in the detection of modules in sparse networks _ , 2011 _ phys .lett . _ * 107 *065701 peixoto t p , _ parsimonious module inference in large networks _, 2013 _ phys .lett . _ * 110 * 148701 trevio s iii , nyberg a , del genio c i and bassler k e , _ fast and accurate determination of modularity and its effect size _ , 2015 _ j. stat .. - theory e. _ p02003 newman m e j and peixoto t p , _ generalized communities in networks _ , 2015 _ phys .lett . _ * 115 * , 0888701 newman m e j , _ modularity and community structure in networks _ , 2006 _ proc .usa _ * 103 *8577 chen m , nguyen t and szymanski b k , _ a new metric for quality of network community structure _ , 2013 _ ase human j. _ * 2 * 226 chen m , kuzmin k and szymanski b k , _ community detection via maximization of modularity and its variants _ , 2014 _ ieee trans . comp .syst . _ * 1 * 46 fortunato s and barthlemy m , _ resolution limit in community detection _ , 2006 _ proc .usa _ * 104 * 36 arenas a , fernandez a and gomez s , _ analysis of the structure of complex networks at different resolution levels _ , 2008 _ new j. phys . _ * 10 * 053039 lancichinetti a and fortunato s , _ limits of modularity maximization in community detection _ , 2011 _ phys .e _ * 84 * 066122 traag v a , van dooren p and nesterov y , _ narrow scope for resolution - limit - free community detection _ , 2011 _ phys .e _ * 84 * 016114 kernighan b and lin s , _ an efficient heuristic procedure for partitioning graphs _, 1970 _ bell syst . tech .j. _ * 49 * 291 del genio c i , kim h , toroczkai z and bassler k e , _ efficient and exact sampling of simple graphs with given arbitrary degree sequence _ , 2010 _ plos one _ * 5 * e10012 zachary w w , _ an information flow model for conflict and fission in small groups _ , 1977 _ j. anthropol .res . _ * 33 * 452
|
many real-world complex networks exhibit a community structure, in which the modules correspond to actual functional units. identifying these communities is a key challenge for scientists. a common approach is to search for the network partition that maximizes a quality function. here, we present a detailed analysis of a recently proposed function, namely modularity density. we show that it does not incur the drawbacks suffered by traditional modularity, and that it can identify networks without ground-truth community structure, deriving its analytical dependence on link density in generic random graphs. in addition, we show that modularity density allows an easy comparison between networks of different sizes, and we also present some limitations that methods based on modularity density may suffer from. finally, we introduce an efficient, quadratic community detection algorithm based on modularity density maximization, validating its accuracy against theoretical predictions and on a set of benchmark networks. _keywords_: complex networks, community detection, network algorithms, modularity density
|
the framework we propose is based on a complex networks approach to quantify physiologic interactions between diverse physiologic systems , where network nodes represent different physiologic systems and network links indicate the dynamical interaction ( coupling ) between systems .this framework allows to quantify the topology and the associated dynamics in the links strength of physiologic networks during a given physiologic state , taking into account the signal output of individual physiologic systems as well as the interactions among them , and to track the evolution of multiple interconnected systems undergoing transitions from one physiologic state to another ( fig .[ dynamicalnetwork ] ) .we introduce the concept of time delay stability ( tds ) to identify and quantify dynamic links among physiologic systems .we study the network of interactions for an ensemble of key integrated physiologic systems ( cerebral , cardiac , respiratory , ocular and muscle activity ) .we consider different sleep stages ( deep , light , rapid eye movement ( rem ) sleep and quiet wake ) as examples of physiologic states .while earlier studies have identified how sleep regulation influences aspects of the specific control mechanism of individual physiologic systems ( e.g. , cardiac or respiratory ) or have focused on the organization of functional connectivity of eeg networks during sleep and under neurological disorders such as epilepsy , the dynamics and topology of a physiologic network comprised of diverse systems have not been studied so far .further , the relation between network topology and function , and how it changes with transitions across distinct physiologic states is not known .we demonstrate that sleep stages are associated with markedly different networks of physiologic interactions ( fig .[ tds - matrices ] ) characterized by different number and strength of links ( figs .[ histograms ] and [ brain ] ) , by different rank distributions ( fig .[ rankplots ] ) , and by specific node connectivity ( fig .[ hubs ] ) .traditionally , differences between sleep stages are attributed to modulation in the sympatho - vagal balance with dominant sympathetic tone during wake and rem : spectral , scale - invariant and nonlinear characteristics of the dynamics of individual physiologic systems indicate higher degree of temporal correlations and nonlinearity during wake and rem compared to non - rem ( light and deep sleep ) where physiologic dynamics exhibit weaker correlations and loss of nonlinearity .in contrast , the network of physiologic interactions shows a completely different picture : the network characteristics during light sleep are much closer to those during wake and very different from deep sleep ( figs .[ tds - matrices ] and [ histograms ] ) .specifically , we find that network connectivity and overall strength of physiologic interactions are significantly higher during wake and light sleep , intermediate during rem and much lower during deep sleep .thus , our empirical observations indicate that while sleep - stage related modulation in sympatho - vagal balance plays a key role in regulating individual physiologic systems , it does not account for the physiologic network topology and dynamics across sleep stages , showing that the proposed framework captures principally new information . 
for both quantities when comparing rem and deep sleep with wake and light sleep ) .there is no significant difference between wake and light sleep ( ) .this pattern is even more pronounced for the subnetwork formed by the brain - periphery and periphery - periphery links shown in * ( c ) * and * ( d ) * ( for both quantities when comparing rem and deep sleep with wake and light sleep ) .in contrast , the number of brain - brain links remains practically unchanged with sleep - stage transitions * ( e ) * , and the average brain - brain link is 5 times stronger in all sleep stages compared to the other network links * ( f)*. the group - averaged patterns in the number of network links and in the average link strength across sleep stages ( black bars ) are consistent with the behavior observed for individual subjects ( red bars in all panels represent the same subject ) .the group - averaged number of links for each sleep stage is obtained from the corresponding group - averaged network in fig .[ tds - matrices ] .the average link strength is measured in % tds and is obtained by taking the mean of all elements in the tds matrix for each sleep stage ( fig .[ tds - matrices ] ) ; it represents the average strength of all links in a network obtained from a given subject during a specific sleep stage which then is averaged over all subjects .error bars indicate standard deviation obtained from a group of 36 subjects ( methods ) . ] to quantify the interaction between physiologic systems and to probe how this interaction changes in time under different physiologic conditions we study the time delay with which modulations in the output dynamics of a given physiologic system are consistently followed by corresponding modulations in the signal output of another system .periods of time with approximately constant time delay indicate a stable physiologic interaction , and stronger coupling between physiologic systems results in longer periods of time delay stability ( tds ) . utilizing the tds methodwe build a dynamical network of physiologic interactions , where network links between physiological systems ( considered as network nodes ) are established when the time delay stability representing the coupling of these systems exceeds a significance threshold level , and where the strength of the links is proportional to the percentage of time for which time delay stability is observed ( methods ) .we apply this new approach to a group of healthy young subjects ( methods ) .we find that the network of interactions between physiologic systems is very sensitive to sleep - stage transitions . in a short time window of just a few minutes the network topology can dramatically change from only a few links to a multitude of links ( fig .[ dynamicalnetwork ] ) indicating transitions in the global interconnectivity between physiological systems .these network transitions are not associated with random occurrence or loss of links but are characterized by certain organization in network topology where given links between physiological systems remain stable during the transition while others do not e.g. 
, brain - brain links persist during the transition from deep sleep to light sleep while brain - periphery links significantly change ( fig .[ dynamicalnetwork]c ) .further , we find that sleep - stage transitions are paralleled by abrupt jumps in the total number of links leading to higher or lower network connectivity ( fig .[ dynamicalnetwork]c , d ) .these network dynamics are observed for each subject in the database , where consecutive episodes of sleep stages are paralleled by a level of connectivity specific for each sleep stage , and where sleep - stage transitions are consistently followed by transitions in network connectivity throughout the course of the night ( fig . [ dynamicalnetwork]d ) .indeed , the network of physiologic interactions exhibits a remarkable responsiveness as network connectivity changes even for short sleep - stage episodes ( arrows in fig .[ dynamicalnetwork]d ) , demonstrating a robust relation between network topology and function .this is the first observation of a real network evolving in time and undergoing topological transitions from one state to another . to identify the characteristic network topology for each sleep stage we obtain group - averaged time delay stability matrices , where each matrix element represents the percentage of time with stable time delay between two physiological systems , estimated over all episodes of a given sleep stage throughout the night .matrix elements above a threshold of statistical significance ( fig .[ threshold - def ] , methods ) indicate stable interactions between physiologic systems represented by network links ( fig .[ tds - matrices ] ) .we find that matrix elements greatly vary for different sleep stages with much higher values for wake and light sleep , lower values for rem and lowest for deep sleep .this is reflected in higher network connectivity for wake and light sleep , lower for rem and significantly reduced number of links during deep sleep ( fig .[ histograms]a ) .further , the tds matrices indicate separate subgroups of interactions between physiologic systems brain - periphery , periphery - periphery and brain - brain interactions that are affected differently during sleep stages and form different sub - networks .specifically , matrix elements representing interactions between peripheral systems ( cardiac , respiratory , chin , eye , leg ) and the brain as well as interactions among the peripheral systems are very sensitive to sleep - stage transitions , leading to different network topology for different sleep stages ( fig .[ tds - matrices ] ) .we find sub - networks with high number of brain - periphery and periphery - periphery links during wake and light sleep , lower number of links during rem and a significant reduction of links at deep sleep ( fig .[ histograms]c ) .in contrast , matrix elements representing brain - brain interactions form a subnetwork with the same number of brain - brain links ( fig .[ histograms]e ) , and stable topology consistently present in the physiologic network during all sleep stages ( fig .[ tds - matrices ] ) .sleep - stage related transitions in network connectivity and topology are not only present in the group - averaged data but also in the physiologic networks of individual subjects , suggesting universal behavior ( fig .[ tds - matrices ] ) .notably , we find a higher number of brain - periphery links during rem compared to deep sleep despite inhibition of motoneurons in the brain leading to muscle atonia during rem .the empirical observations of 
significant difference in network connectivity and topology during light sleep compared to deep sleep are surprising , given the similarity in spectral , scale - invariant and nonlinear properties of physiologic dynamics during light sleep and deep sleep ( both stages traditionally classified as non - rapid eye movement sleep ( nrem ) ) , and indicate that previously unrecognized aspects of sleep regulation may be involved in the control of physiologic network interactions .+ + networks with identical connectivity and topology can exhibit different strength of their links . network link strength is determined as the fraction of time when tds is observed ( methods ) .we find that the average strength of network links changes with sleep - stage transitions : network links are significantly stronger during wake and light sleep compared to rem and deep sleep a pattern similar to the behavior of the network connectivity across sleep stages ( fig .[ histograms]a , b ) .further , subnetworks of physiologic interactions exhibit different relationship between connectivity and average link strength .specifically , the subnetwork of brain - periphery and periphery - periphery interactions is characterized by significantly stronger links ( and also higher connectivity ) during wake and light sleep , and much weaker links ( with lower network connectivity ) during deep sleep and rem ( fig .[ histograms]c , d ) .in contrast , the subnetwork of brain - brain interactions exhibits very different patterns for the connectivity and the average link strength while the group average subnetwork connectivity remains constant across sleep stages , the average link strength varies with highest values during light sleep and deep sleep , and a dramatic decline during rem .the observation of significantly stronger links in the brain - brain subnetwork during nrem compared to rem sleep is consistent with the characteristic of nrem as eeg - synchronized sleep and rem as eeg - desynchronized sleep . during nrem sleep adjacent cortical neurons firesynchronously with a relatively low frequency rhythm leading to coherence between frequency bands in the eeg signal , and thus to stable time delays and strong network links ( fig .[ histograms]f ) .in contrast , during rem sleep cortical neurons are highly active but fire asynchronously , resulting in weaker links ( fig .[ histograms]f ) .our findings of stronger links in the brain - brain subnetwork during non - rem sleep ( fig .[ histograms]f and fig .[ brain ] ) indicate that bursts ( periods of sudden temporal increase ) in the spectral power of one eeg - frequency band are consistently synchronized in time with bursts in a different eeg - frequency band , thus leading to longer periods of time delay stability and correspondingly stronger network links .this can explain some seemingly surprising network links for example , we find a strong link between and brain activity during non - rem sleep ( fig .[ tds - matrices ] ) although waves are greatly diminished and waves are dominant . 
since the spectral densities of both waves are normalized before the tds analysis ( methods ) , the presence of a stable - link indicates that a relative increase in the spectral density in one wave is followed , with a stable time delay , by a corresponding increase in the density of the other wave an intriguing physiologic interaction which persists not only during deep sleep but is also present in light sleep , rem and quiet wake ( fig .[ tds - matrices ] ) .notably , the average link strength of the brain - brain subnetwork is by a factor of higher compared to all other links in the physiologic network ( fig .[ histograms]d , f ) .the finding of completely different sleep - stage stratification patterns in key network properties of the brain - brain subnetwork compared to the periphery - periphery / brain - periphery subnetworks suggests a very different role these sub - networks play in coordinating physiologic interactions during sleep .the similarity in the brain - brain subnetwork during deep sleep and light sleep indicates that the proposed tds approach is sensitive to quantify synchronous slow - wave brain activity during nrem sleep that leads to stronger brain - brain links during light sleep and deep sleep ( 50 - 60% tds ) compared to rem ( 35% tds ) , as shown in ( fig .[ histograms]f and fig .[ brain ] ) .the significant difference between light sleep and deep sleep observed for the periphery - periphery / brain - periphery subnetwork in the number of links ( t - test : ) as well as in the average link strength ( t - test : ) , indicates that the interactions between physiologic dynamics outside the brain are very different during these sleep stages .our finding that the average link strength exhibits a specific stratification pattern across sleep stages ( fig .[ histograms ] ) raises the question whether the underlying distribution of the network links strength is also sleep - stage dependent . to this endwe probe the relative strength of individual links , and we obtain the rank distribution of the strength of network links for each sleep stage averaged over all subjects in the group ( fig .[ rankplots]a ) .we find that the rank distribution corresponding to deep sleep is vertically shifted to much lower values for the strength of the network links , while the rank distribution for light sleep and wake is for all links consistently higher than the distribution for rem .thus , the sleep - stage stratification pattern we find for the average strength of the network links ( fig . 
[ histograms]d ) originates from the systematic change in the strength of individual network links with sleep - stage transitions .notably , while the strength of individual network links changes significantly with sleep stages , the rank order of the links does not significantly change .after rescaling the rank distributions for light sleep and rem ( by horizontal and vertical shifts ) , we find that they collapse onto the rank plots of deep sleep and wake respectively , following two distinct functional forms : a slow and smoothly decaying rank distribution for rem and wake , and a much faster decaying rank distribution for deep sleep and light sleep with a characteristic plateau in the mid rank range indicating a cluster of links with similar strength ( fig .[ rankplots]b ) .we note that , although the form of the rank distributions for deep sleep and light sleep as well as for wake and rem are respectively very similar , the average strength of the links is significantly different between deep sleep and light sleep and between wake and rem ( fig .[ histograms]d ) .our observations that physiologic networks undergo dynamic transitions where key global properties significantly change with sleep - stage transitions , raise the question whether local topology and connectivity of individual network nodes also change during these transitions . considering each physiologic system ( network node ) separately, we examine the number and strength of all links connecting the system with the rest of the network .specifically , we find that the cardiac system is highly connected to other physiologic systems in the network during wake and light sleep ( fig .[ hubs ] ) .in contrast , during deep sleep we do not find statistically significant time delay stability in the interactions of the cardiac system , which is reflected by absence of cardiac links ( fig .[ hubs ] ) .further , we find that the average strength of the links connected to the cardiac system also changes with sleep stages : stronger interactions ( high % tds ) during wake and light sleep , and significantly weaker interactions below the significance threshold during deep sleep ( fig .[ hubs ] ) .such ` isolation ' of the cardiac node from the rest of the network indicates a more autonomous cardiac function during deep sleep also supported by earlier observations of breakdown of long - range correlations and close to random behavior in heartbeat intervals in this sleep stage .transition to light sleep , rem and wake , where the average link strength and connectivity of the cardiac system is significantly higher indicating increased interactions with the rest of the network , leads to correspondingly higher degree of correlations in cardiac dynamics .similarly , respiratory dynamics also exhibit high degree of correlations during rem and wake , lower during light sleep and close to random behavior during deep sleep .such transitions in the number and strength of links across sleep stages we also find for other network nodes ( fig . [ hubs ] ) . 
moreover , the sleep - stage stratification pattern in connectivity and average link strength for individual network nodes ( fig .[ hubs ] ) is consistent with the pattern we observe for the entire network ( fig .[ histograms ] ) .our findings of significant reduction in the number and strength of brain - periphery and periphery - periphery links in the corresponding sub - networks during deep sleep indicate that breakdown of cortical interactions , previously reported during deep sleep , may also extend to other physiologic systems under neural regulation . indeed, the low connectivity in the physiologic network we find in deep sleep may explain why people awakened during deep sleep do not adjust immediately and often feel groggy and disoriented for a few minutes .this effect is not observed if subjects are awakened from light sleep when we find the physiologic network to be highly connected ( fig .[ tds - matrices ] ) .further , since risk of predation modifies sleep architecture and since abrupt awakening from deep sleep is associated with increased sleep inertia , higher sensory threshold , and impaired sensory reaction and performance that may lead to increased vulnerability , the fact that deep sleep ( lowest physiologic network connectivity ) dominates at the beginning of the night and not close to dawn , when many large predators preferably hunt , may have been evolutionarily advantageous .introducing a framework based on the concept of tds we identify a robust network of interactions between physiologic systems , which remains stable across subjects during a given physiologic state .further , changes in the physiologic state lead to complex network transitions associated with a remarkably structured reorganization of network connectivity and topology that simultaneously occurs in the entire network as well as at the level of individual network nodes , while preserving the hierarchical order in the strength of individual network links . such network transitions lead to the formation of sub - networks of physiologic interactions with different topology and dynamical characteristics . in the context of sleep stages ,network transitions are characterized by a specific stratification pattern where network connectivity and link strength are significantly higher during light sleep compared to deep sleep and during wake compared to rem .this can not be explained by the dynamical characteristics of the output signals from individual physiologic systems which are similar during light sleep and deep sleep as well as during wake and rem .the dramatic change in network structure with transition from one physiologic state to another within a short time window indicates a high flexibility in the interaction between physiologic systems in response to change in physiologic regulation . such change in network structure in response to change in the mechanisms of control during different physiologic states suggests that our findings reflect intrinsic features of physiologic interaction . 
the observed stability in network topology and rank order of links strength during sleep stages , and the transitions in network organization across sleep stages provide new insight into the role which individual physiologic systems as well as their interactions play during specific physiologic states .while our study is limited to a data - driven approach these empirical findings may facilitate future efforts on developing and testing network models of physiologic interaction .this system - wide integrative approach to individual systems and the network of their interactions may facilitate the emergence of a new dimension to the field of systems physiology that will include not only interactions within but also across physiologic systems . in relation to critical clinical care , where multiple organ failure is often the reason for fatal outcome , our framework may have practical utility in assessing whether dynamical links between physiologic systems remain substantially altered even when the function of specific systems is restored after treatment .while we demonstrate one specific application , the framework we develop can be applied to a broad range of complex systems where the tds method can serve as a tool to characterize and understand the dynamics and function of real - world heterogeneous and interdependent networks .the established relation between dynamical network topology and network function has not only significant medical and clinical implications , but is also of relevance for the general theory of complex networks .we analyze continuously recorded multi - channel physiologic data obtained from 36 healthy young subjects ( 18 female , 18 male , with ages between 20 - 40 , average 29 years ) during night - time sleep ( average record duration is 7.8 hours ) .this allows us to track the dynamics and evolution of the network of physiologic interactions during different sleep stages and sleep - stage transitions ( fig .[ dynamicalnetwork ] ) .we focus on physiologic dynamics during sleep since sleep stages are well - defined physiological states , and external influences due to physical activity or sensory inputs are reduced during sleep .sleep stages are scored in 30 sec epochs by sleep lab technicians based on standard criteria .in particular , we focus on the electroencephalogram ( eeg ) , the electrocardiogram ( ecg ) , respiration , the electrooculogram ( eog ) , and the electromyogram ( emg ) of chin and leg . in order to compare these very different signals with each other and to study interrelations between them ,we extract the following time series from the raw signals : the spectral power of five frequency bands of the eeg in moving windows of 2 sec with a 1 sec overlap : waves ( 0.5 - 3.5 hz ) , waves ( 4 - 7.5 hz ) , waves ( 8 - 11.5 hz ) , waves ( 12 - 15.5 hz ) , waves ( 16 - 19.5 hz ) ; the variance of the eog and emg signals in moving windows of 2 sec with a 1 sec overlap ; heartbeat rr - intervals and interbreath intervals are both re - sampled to 1 hz ( 1 sec bins ) after which values are inverted to obtain heart rate and respiratory rate .thus , all time series have the same time resolution of 1 sec before the tds - analysis is applied. 
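a minimal sketch of this pre-processing, assuming the raw eeg, eog and emg traces are available as numpy arrays sampled at a rate fs. the band edges and the 2 sec windows with 1 sec overlap follow the text; the band names, the hann taper and the plain fft periodogram are our own choices and are not taken from the study itself.

```python
import numpy as np

# frequency bands from the text (Hz); the names follow standard EEG nomenclature
EEG_BANDS = {"delta": (0.5, 3.5), "theta": (4.0, 7.5), "alpha": (8.0, 11.5),
             "sigma": (12.0, 15.5), "beta": (16.0, 19.5)}

def band_power_series(x, fs, bands=EEG_BANDS, win_sec=2.0, step_sec=1.0):
    """Spectral power of each band in moving 2 s windows with 1 s overlap.

    Returns one 1 Hz time series per band (one value per window).
    """
    win, step = int(win_sec * fs), int(step_sec * fs)
    taper = np.hanning(win)
    freqs = np.fft.rfftfreq(win, d=1.0 / fs)
    out = {name: [] for name in bands}
    for start in range(0, len(x) - win + 1, step):
        psd = np.abs(np.fft.rfft(x[start:start + win] * taper)) ** 2
        for name, (lo, hi) in bands.items():
            out[name].append(psd[(freqs >= lo) & (freqs <= hi)].sum())
    return {name: np.asarray(v) for name, v in out.items()}

def sliding_variance(x, fs, win_sec=2.0, step_sec=1.0):
    """Variance of a signal (e.g. EOG or EMG) in the same moving windows."""
    win, step = int(win_sec * fs), int(step_sec * fs)
    return np.array([x[s:s + win].var() for s in range(0, len(x) - win + 1, step)])
```

heart rate and respiratory rate would instead be obtained by resampling the rr and interbreath interval series to 1 hz and inverting the values, as described above.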
7% tds ( marked by a vertical dashed line ) all network links ( 100% ) are statistically significant .periphery - periphery and brain - periphery links during all sleep stages are considered when determining this threshold .statistical significance of a specific physiologic link is estimated by comparing the strength distribution of this link across all subjects in the group with a distribution of surrogate links representing `` interactions '' between the same systems paired from different subjects ) .based on this surrogate test , a p - value obtained from the student t - test indicates statistically significant strength of a given link . ] utilizing sleep data as an example we demonstrate that a network approach to physiologic interactions is necessary to understand how modulations in the regulatory mechanism of individual systems translate into reorganization of physiologic interactions across the human organism .integrated physiologic systems are coupled by feedback and/or feed forward loops with a broad range of time delays .to probe physiologic coupling we propose an approach based on the concept of time delay stability : in the presence of stable / strong interactions between two systems , transient modulations in the output signal of one system lead to corresponding changes that occur with a stable time lag in the output signal of another coupled system .thus , long periods of constant time delay indicate strong physiologic coupling .the tds method we developed for this study consists of the following steps : ( 1 . ) to probe the interaction between two physiologic systems x and y , we consider their output signals and each of length .we divide both signals and into overlapping segments of equal length sec .we choose an overlap of sec which corresponds to the time resolution of the conventional sleep - stage scoring epochs , and thus - 1 $ ] .prior to the analysis , the signal in each segment is normalized separately to zero mean and unit standard deviation , in order to remove constant trends in the data and to obtain dimensionless signals .this normalization procedure assures that the estimated coupling between the signals and is not affected by their relative amplitudes .next , we calculate the cross - correlation function , , within each segment by applying periodic boundary conditions . 
for each segment we define the time delay to correspond to the maximum in the absolute value of the cross - correlation function in this segment .time periods of stable interrelation between two signals are represented by segments of approximately constant ( light shade region in fig .[ dynamicalnetwork]b ) in the newly defined series of time delays , .in contrast , absence of stable coupling between the signals corresponds to large fluctuations in ( dark shade region in fig .[ dynamicalnetwork]b ) .we identify two systems as linked if their corresponding signals exhibit a time delay that does not change by more than sec for several consecutive segments .we track the values of along the series : when for at least four out of five consecutive segments ( corresponding to a window of sec ) the time delay remains in the interval [ , these segments are labeled as stable .this procedure is repeated for a sliding window with a step size one along the entire series .the % tds is finally calculated as the fraction of stable points in the time series .longer periods of tds between the output signals of two systems reflect more stable interaction / coupling between these systems .thus , the strength of the links in the physiologic network is determined by the percentage of time when tds is observed : higher percentage of tds corresponds to stronger links . to identify physiologically relevant interactions , represented as links in the physiologic network , we determine a significance threshold level for the tds based on comparison with surrogate data :only interactions characterized by tds values above the significance threshold are considered .the tds method is general , and can be applied to diverse systems .it is more reliable in identifying physiologic coupling compared to traditional cross - correlation and cross - coherence analyses ( fig .[ rank - ccf ] ) which are not suitable for heterogeneous and nonstationary signals , and are affected by the degree of auto - correlations in these signals .to compare interactions between physiologic systems which are very different in strength and vary with change of physiologic state ( e.g. , transitions across sleep stages ) , we define the significance threshold as the percent of tds for which all links included in the physiologic network are statistically significant . to identify statistical significance of a given link between two physiologic systems , we compare the distribution of tds values for this link obtained from all 36 subjects in our database with the distribution of tds values obtained for 100 surrogates of this link where the signal outputs from the same two physiologic systems taken from different subjects are paired for the analysis in order to eliminate the endogenous physiologic coupling .a student t - test was performed to determine the statistical significance between the two distributions .this procedure is repeated for all pairs of systems ( links ) in the network , and network links are identified as significant when the t - test p - value . the significance threshold level for tdsis then defined as the value above which all network links are statistically significant , and thus represent endogenous interactions between physiologic systems .we find that a threshold of approximately 7% tds is needed to identify networks of statistically significant links for all sleep stages ( fig .[ threshold - def ] ) . 
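the tds computation itself can be sketched as follows for two pre-processed 1 hz series of equal length. the per-segment normalisation, the circular cross-correlation and the four-out-of-five stability rule follow the text; the segment length and the tolerance on the time delay are elided in the excerpt above, so the values used below (60 s segments with a 30 s step matching the sleep-scoring epochs, and a 1 s tolerance) are explicit assumptions, not values taken from the study.

```python
import numpy as np

def time_delay_series(x, y, seg_len=60, step=30):
    """Time delay of maximal |cross-correlation| for overlapping segments.

    x, y are 1 Hz series of equal length (one sample per second). The exact
    segment length is elided in the text; 60 s is an assumption here, while the
    30 s step matches the resolution of the sleep-scoring epochs.
    """
    delays = []
    lags = np.arange(seg_len)
    lags[lags > seg_len // 2] -= seg_len          # signed circular lags
    for start in range(0, len(x) - seg_len + 1, step):
        a = x[start:start + seg_len].astype(float)
        b = y[start:start + seg_len].astype(float)
        a = (a - a.mean()) / (a.std() + 1e-12)    # normalise each segment separately
        b = (b - b.mean()) / (b.std() + 1e-12)
        # circular cross-correlation (periodic boundary conditions) via the FFT
        cc = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real / seg_len
        delays.append(lags[np.argmax(np.abs(cc))])
    return np.asarray(delays)

def tds_percentage(delays, run=5, needed=4, tol=1):
    """%TDS: fraction of segments belonging to a stable run of time delays.

    A window of `run` consecutive segments is labelled stable when at least
    `needed` of them stay within `tol` seconds of the window median (the
    'four out of five' rule of the text; tol = 1 s is an assumption).
    """
    stable = np.zeros(len(delays), dtype=bool)
    for i in range(len(delays) - run + 1):
        window = delays[i:i + run]
        if np.sum(np.abs(window - np.median(window)) <= tol) >= needed:
            stable[i:i + run] = True
    return 100.0 * stable.mean()
```

the resulting %tds of a pair of signals is then compared against the significance threshold (here about 7% tds) to decide whether a link is drawn in the physiologic network.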
to confirm that the tds method captures physiologically relevant information about the endogenous interactions between systems, we perform a surrogate test where we pair physiologic signals from different subjects , thus eliminating physiologic coupling . applying the tds method to these surrogate data ,we obtain almost uniform rank distributions with significantly decreased link strength ( fig .[ rankplots]a ) due to the absence of physiologic interactions .further , all surrogate distributions conform to a single curve , indicating that the sleep - stage stratification we observe for the real data reflects indeed changes in physiologic coupling with sleep - stage transitions . in contrast, the same surrogate test applied to traditional cross - correlation analysis does not show a difference between the rank distributions from surrogate and real data ( fig .[ rank - ccf ] ) . )is preserved also for thresholds of 5% and 9% tds , indicating stability of the results .note , that the number of links in the brain - brain subnetwork remains unchanged for different sleep stages ( e , f ) , since the strength of all links in this subnetwork is well above 9% tds ( fig .[ histograms]f ) .] we find that the tds method is better suited than the traditional cross - correlation analysis in identifying networks of endogenous physiologic interactions .rank plots obtained from cross - correlation analysis ( fig .[ rank - ccf ] ) show that the cross - correlation strength ( global maximum of the cross - correlation function ) is consistently lower for all links during deep sleep , higher for light sleep and rem and highest during wake a stratification related to the gradual increase in the strength of autocorrelations in the signal output of physiologic systems , which in turn increases the degree of cross - correlations .surrogate tests based on pairs of signals from different subjects , where the coupling between systems is abolished but physiologic autocorrelations are preserved , show no statistical difference between the surrogate ( open symbols ) and original ( filled symbols ) rank distributions of , suggesting that in this context cross - correlations do not provide physiologically relevant information regarding the interaction between systems .indeed , even for uncoupled systems high autocorrelations in the output signals lead to spurious detection of cross - correlations .in contrast , the tds method is not affected by the autocorrelations surrogate rank plots for different sleep stages collapse and do not exhibit vertical stratification as shown in ( fig .[ rankplots]a ) . to test the robustness of the stratification pattern in network topology and connectivity across sleep stages ( shown in fig .[ tds - matrices ] and fig .[ histograms ] ) , we repeat our analyses for two additional thresholds : 5% tds and 9% tds . with increasing the threshold for tds from 5% to 9% the overall number of links in the network decreases ( compare fig .[ diff - thresholds]a , c , e with fig .[ diff - thresholds]b , d , f ) .however , the general sleep - stage stratification pattern is preserved with highest number of links during light sleep and wake , lower during rem , and significant reduction in network connectivity during deep sleep ( fig . [ diff - thresholds ] ) .the stability of the observed pattern in network connectivity for a relatively broad range around the significance threshold of 7% tds indicates that the identified network is a robust measure of physiologic interactions .we thank t. 
penzel for providing data and helpful comments, and a. y. schumann for help with data selection, data pre-processing and discussions. we acknowledge support from nih grant 1r01-hl098437, the us-israel binational science foundation (bsf grant 2008137), the office of naval research (onr grant 000141010078), the israel science foundation, the european community (projects daphnet/fp6 ist 018474-2 and socionical/fp7 ict 231288) and the brigham and women's hospital biomedical research institute fund. r.p.b. acknowledges support from the german academic exchange service (daad fellowship within the postdoc programme). a.b. and r.p.b. contributed equally to this work. r.p.b., j.w.k., s.h. and p.ch.i. designed research. a.b. and r.p.b. wrote the algorithm. r.p.b. and p.ch.i. analysed the data. r.p.b. and p.ch.i. wrote the paper with contributions from all. competing financial interests: the authors declare no competing financial interests. correspondence and requests for materials should be addressed to p.ch.i. (email: plamen.bu.edu or pivanov.org).
|
the human organism is an integrated network where complex physiologic systems , each with its own regulatory mechanisms , continuously interact , and where failure of one system can trigger a breakdown of the entire network . identifying and quantifying dynamical networks of diverse systems with different types of interactions is a challenge . here , we develop a framework to probe interactions among diverse systems , and we identify a physiologic network . we find that each physiologic state is characterized by a specific network structure , demonstrating a robust interplay between network topology and function . across physiologic states the network undergoes topological transitions associated with fast reorganization of physiologic interactions on time scales of a few minutes , indicating high network flexibility in response to perturbations . the proposed system - wide integrative approach may facilitate the development of a new field , network physiology . physiologic systems under neural regulation exhibit high degree of complexity with nonstationary , intermittent , scale- invariant and nonlinear behaviors . moreover , physiologic dynamics transiently change in time under different physiologic states and pathologic conditions , in response to changes in the underlying control mechanisms . this complexity is further compounded by various coupling and feedback interactions among different systems , the nature of which is not well - understood . quantifying these physiologic interactions is a challenge as one system may exhibit multiple simultaneous interactions with other systems where the strength of the couplings may vary in time . to identify the network of interactions between integrated physiologic systems , and to study the dynamical evolution of this network in relation to different physiologic states , it is necessary to develop methods that quantify interactions between diverse systems . recent studies have identified networks with complex topologies , have focused on emergence of self - organization and complex network behavior out of simple interactions , on network robustness , and more recently on critical transitions due to failure in the coupling of interdependent networks . growth dynamics of structural networks have been investigated in network models , and in physical systems , and various structural and functional brain networks have been explored . however , understanding the relation between topology and dynamics of complex networks remains a challenge , especially when networks are comprised of diverse systems with different types of interaction , each network node represents a multicomponent complex system with its own regulatory mechanism , the output of which can vary in time , and when transient output dynamics of individual nodes affect the entire network by reinforcing ( or weakening ) the links and changing network topology . a prime example of a combination of all these network characteristics is the human organism , where integrated physiologic systems form a network of interactions that affects physiologic function , and where breakdown in physiologic interactions may lead to a cascade of system failures . + + we investigate the network of interactions between physiologic systems , and we focus on the topology and dynamics of this network and their relevance to physiologic function . we hypothesize that during a given physiologic state the physiologic network may be characterized by a specific topology and coupling strength between systems . 
further , we hypothesize that coupling strength and network topology may abruptly change in response to transition from one physiologic state to another . such transitions may also be associated with changes in the connectivity of specific network nodes , i.e. , the number of systems to which a given physiologic system is connected can change , forming sub - networks of physiologic interactions . probing physiologic network connectivity and the stability of physiologic coupling across physiologic states may thus provide new insights on integrated physiologic function . such a systems - wide perspective on physiologic interactions , tracking multiple components simultaneously , is necessary to understand the relation between network topology and function .
|
european option prices are usually quoted in terms of the corresponding implied volatility , and over the last decade a large number of papers ( both from practitioners and academics ) has focused on understanding its behaviour and characteristics .the most important directions have been towards ( i ) understanding the behaviour of the implied volatility in a given model ( see , , for instance ) and ( ii ) deciphering its behaviour in a model - independent way , as in , or . these results have provided us with a set of tools and methods to check whether a given parameterisation is free of arbitrage or not . in particular , given a set of observed data ( say european calls and puts for different strikes and maturities ) , it is of fundamental importance to determine a methodology ensuring that both interpolation and extrapolation of this data are also arbitrage - free .such approaches have been carried out for instance in , in and in .several parameterisations of the implied volatility surface have now become popular , in particular , and , albeit not ensuring absence of arbitrage . in the recent paper , gatheral andjacquier proposed a new class of svi implied volatility parameterisation , based on the previous works of gatheral .in particular they provide explicit sufficient and in a certain sense almost necessary conditions ensuring that such a surface is free of arbitrage .we shall recall later the exact definition of arbitrage , and see that it can be decomposed into butterfly arbitrage and calendar spread arbitrage .this new class depends on the maturity and can hence be used to model the whole volatility surface , and not a single slice .it also depends on the at - the - money total implied variance , and on a positive function such that the total variance as a function of time - to - maturity and log-(forward)-moneyness is given by , where is the classical ( normalised ) svi parameterisation from , and an asymmetry parameter ( essentially playing the role of the correlation between spot and volatility in stochastic volatility models ) . in this work ,we generalise their framework to volatility surfaces parameterised as for some ( general ) functions , , . we obtain ( sections [ sec_cal ] and [ sec_butter ] ) necessary and sufficient conditions coupling the functions and that preclude arbitrage .this allows us to obtain ( i ) the exact set of admissible functions in the symmetric ( ) svi case , and ( ii ) a constraint - free parameterisation of gatheral - jacquier functions satisfying the conditions of . in passing ( section [ sec : nonsmooth ] ), we extend the class of possible functions by allowing for non - smooth implied volatility functions . 
finally ( section [ sec_example ] ), we exhibit examples of non - svi arbitrage - free implied volatility surfaces .we adopt here a general formulation for the implied volatility parameterisation .while this allows us to determine general classes of arbitrage - free volatility surfaces , it does however make some of the results fairly abstract and not readily tractable .we therefore provide some simpler ( and thereby weaker ) conditions ensuring no - arbitrage .define and assume that is asymptotically linear ( definition [ def : linear ] ) and that .corollary [ cor : easycondition1 ] and proposition [ prop : effboundpsi ] each provide necessary conditions preventing butterfly arbitrage ( definition [ def : butterfly ] ) , namely }\\ \psi(z ) \leq \displaystyle \kappa^2+\frac{2z}{m_\infty } - \kappa \sqrt{\kappa^2+\frac{2z}{m_\infty } } , \qquad & \text{for some } \kappa\geq 0 \text { and large enough [ proposition~\ref{prop : effboundpsi}]}. \end{array}\ ] ] calendar spread arbitrage is easier to prevent and we refer the reader directly to proposition [ prop : first_coupling ] for necessary and sufficient conditions , that are easy to check in practice . +* notations : * we consider here european option prices with maturity and strike , written on an underlying stock . without loss of generality we shall always assume that and that interest rates are null , and hence the log ( forward ) moneyness reads . we denote the black - scholes value for a european call option , for a strike and total variance , and more generally for ( any ) european call prices with strike and maturity . for any , the corresponding implied volatility is denoted by and the total variance is defined by . with a slight abuse of language ( commonly accepted in the finance jargon ), we refer to the two - dimensional map as the ( implied ) volatility surface .finally , for two functions and not null almost everywhere , we say that at whenever .this preliminary section serves several purposes : we first recall the very definition of ` arbitrage freeness ' and its characterisation in terms of implied volatility .we then state and prove a few results ( which are also of independent interest ) related to this notion of arbitrage .we finally quickly review the parameterisation proposed in and introduce an extension , which is our new contribution . as defined in ,absence of static arbitrage corresponds to the existence of a non - negative martingale ( on some probability space ) such that european call options ( written on this martingale ) can be written as risk - neutral expectations of their final payoffs .this rigorous definition is however not easily tractable and often difficult to check .roper proved that , given a ( twice differentiable in strike ) volatility surface , absence of static arbitrage is satisfied as soon as the following three conditions holds : no calendar spread arbitrage , no butterfly arbitrage and the time - zero smile is null everywhere .we now define these terms precisely .[ def : calendar ] a volatility surface is free of calendar spread arbitrage if , for all and .absence of calendar spread arbitrage thus means that the total implied variance is an increasing function .define now the operator acting on functions by for all and .even though the operator does not act on the second component of the function , we keep this notation for clarity . with the usual notations from the black - scholes formula : , this leads to the definition of butterfly arbitrage ( * ? ? ? 
* definition 2.3 ) : [ def : butterfly0 ] for any , the volatility surface is free of butterfly arbitrage if the corresponding density is non - negative .it is indeed well - known ( under suitable differentiability assumptions ) that the second derivative ( with respect to the strike ) of the call price function gives the density of the stock price ( see for details ) .the following corollary ( * ? ? ?* lemma 2.2 ) provides an easy - to - check ( at least in principle ) condition ensuring no butterfly arbitrage : [ def : butterfly ] a volatility surface is free of butterfly arbitrage if for all and , for all . if represents the ( dupire ) local volatility , the relationship , for all is now standard ( see ) .therefore absence of static arbitrage implies that both the numerator and the denominator are non - negative quantities . the extra condition ( from absence of buttefly arbitrage )is the ` large - moneyness behaviour ' ( lmb ) condition which is equivalent to call option prices tending to zero as the strike tends to ( positive ) infinity , as proved in ( * ? ? ?* theorem 5.3 ) .the following lemma however shows that other asymptotic behaviours of and hold in full generality .this was proved by rogers and tehranchi in a general framework , and we include here a short self - contained proof .[ lem : rt10 ] let be any positive real function . then a. ; b. .the arithmetic mean - geometric inequality reads , when , which implies ( i ) , and ( ii ) follows using , when .the missing statements in lemma [ lem : rt10 ] are the lmb condition and the small - moneyness behaviour ( smb ) : . to investigate further , let us remark that the framework developed in encompasses situations where the underlying stock price can be null with positive probability. this can indeed be useful to model the probability of default of the underlyer .computations similar in spirit to show that the marginal law of the stock price at some fixed time has no mass at zero if and only if , which is a statement about a small - moneyness behaviour .this can be fully recast in terms of implied volatility , and the above missing conditions then come naturally into play in the following proposition , the proof of which is postponed to appendix [ sec : prop : nomass ] : [ prop : nomass ] ( symmetry under small - moneyness behaviour ) + let a real function satisfy a. and for all ; b. ( smb condition ) ; c. ( lmb condition ) .define the two functions and by . then 1 . and define two densities of probability measures on with respect to the lebesgue measure , i.e. ; 2 . , so that ; 3 . is the density of probability associated to call option prices with implied volatility , in the sense that , and is the density of probability associated to call option prices with implied volatility .the strict positivity of the function in assumption ( i ) ensures that the support of the underlying distribution is the whole real line .one could bypass this assumption by considering finite support as in . 
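since the displayed formulas are missing from this text , the following sketch makes two of the objects above concrete : the baseline gatheral - jacquier ssvi total variance w(k , theta) = ( theta / 2 ) ( 1 + rho * phi(theta) * k + sqrt( ( phi(theta) * k + rho )^2 + 1 - rho^2 ) ) , of which this paper studies generalisations , and the slice - wise butterfly check through the function g(k) = ( 1 - k w'/(2w) )^2 - ( w'^2 / 4 ) ( 1/w + 1/4 ) + w''/2 , whose non - negativity ( together with the lmb condition ) is the usual easy - to - check criterion for a single slice . the power - law phi and all numerical values are illustrative assumptions , the derivatives are taken by central differences , and none of this is the authors' code .

```python
import numpy as np

def ssvi_total_variance(k, theta, rho, phi):
    # baseline (gatheral - jacquier) ssvi total implied variance:
    #   w(k, theta) = (theta / 2) * (1 + rho * phi(theta) * k
    #                                + sqrt((phi(theta) * k + rho)**2 + 1 - rho**2))
    p = phi(theta)
    return 0.5 * theta * (1.0 + rho * p * k
                          + np.sqrt((p * k + rho) ** 2 + 1.0 - rho ** 2))

def g_slice(w, k, h=1e-4):
    # g(k) = (1 - k w'/(2w))**2 - (w'**2 / 4) * (1/w + 1/4) + w''/2 for one slice,
    # with derivatives of the total variance w(.) taken by central differences
    wk = w(k)
    w1 = (w(k + h) - w(k - h)) / (2.0 * h)
    w2 = (w(k + h) - 2.0 * wk + w(k - h)) / h ** 2
    return (1.0 - k * w1 / (2.0 * wk)) ** 2 \
        - 0.25 * w1 ** 2 * (1.0 / wk + 0.25) + 0.5 * w2

# illustrative usage (parameter values are made up; the lmb tail condition,
# i.e. call prices vanishing as k -> +infinity, must be checked separately)
phi = lambda theta: 0.5 * theta ** (-0.4)          # an example power-law phi
k_grid = np.linspace(-1.5, 1.5, 301)
w = lambda k: ssvi_total_variance(k, theta=0.04, rho=-0.7, phi=phi)
slice_ok = bool(np.all(g_slice(w, k_grid) >= 0.0))
```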
in the latter slightly more general case , the statements and proofs would be very analogous but much more notationally inconvenient .symmetry properties of the implied volatility have been investigated in the literature , and we refer the interested reader to this proposition has been intentionally stated in a maturity - free way : it is indeed a purely ` marginal ' or cross - sectional statement , which does not depend on time .a natural question arises then : can such a function , satisfying the assumptions of proposition [ prop : nomass ] , represent the total implied variance smile at time associated to some martingale ( issued from at time ) ?the answer is indeed positive and this can be proved as follows .consider the natural filtration of a standard ( one - dimensional ) brownian motion , .let be the cumulated distribution function associated to characterised in proposition [ prop : nomass ] , and let be the gaussian cumulative distribution function. then the random variable has law , and .set now , then is a martingale issued from .note that is even a brownian martingale and therefore a continuous martingale .the associated call option prices ] . for any fixed , the function appearing on the right - hand side of proposition [ prop : nocsastrictly ] ( ii )simplifies to given in .in particular and . for any , we have where .when , is concave on with , and hence the map is decreasing on and its infimum is equal to . for , the strict convexity of and the inequality implies that the equation has a unique solution in , which in fact is equal to given in. then the map is decreasing on and increasing on .its infimum is attained at and is equal to defined in .[ rem : comparison ] in , the authors prove that the two conditions ( altogether ) and ( for all ) are sufficient to prevent butterfly arbitrage in the uncorrelated ( ) case .these two conditions can be combined to obtain .a tedious yet straightforward computation shows that is increasing on and maps this interval to .notwithstanding the fact that our condition is necessary and sufficient , it is then clear that a. for , it is also weaker than the one in whenever ; b. for ( which accounts for most practically relevant cases ) it is weaker whenever . in particular , item ( ii ) could be used as a sufficient and necessary lower bound condition ( depending on ) for the function on .the formulation of arbitrage freeness in ( * ? ? ?* theorem 2.1 ) is minimal in the sense that the regularity conditions on the call option prices are necessary and sufficient : to be convex in the strike direction and non - decreasing in the maturity direction . the implied volatility formulation ( * ? ? ?* theorem 2.9 , condition iv.1 ) however , assumes that the total variance is twice differentiable in the strike direction .this regularity is certainly not required ; in fact , the author ( * ? ? 
?* theorem 2.9 ) proves the latter by checking the necessary assumptions of theorem 2.1 on the properties of the call option prices defined by the formula more precisely , roper uses the regularity assumption in of in order to define pointwise the second derivative of with respect to the strike .he then proves that the latter is positive , henceforth obtaining the convexity of the price with respect to the strike ( assumption a.1 of theorem 2.1 ) .it turns out that the same result can be obtained without this regularity assumption : assume that for any , the function is continuous and almost everywhere differentiable .since is defined almost everywhere , and since the term in is linear , can be defined as a distribution and we replace the assumption everywhere by in the distribution sense . in order to do so on us from dealing with the boundary behaviour at the origin we can assume additionally the smb condition ( proposition [ prop : nomass ] ( ii ) ) and work on .to conclude , we need to prove that the call options defined in are convex function in . since a continuous function is convex if and only if its second derivative as a distribution is a a positive distribution , the only remaining point to check is that the second derivative as a distribution of is .but this the same computation already carried out in , and the result hence follows .let us finally note that our assumptions on are indeed minimal : conversely , if we start from option prices convex in , their first derivative are defined almost everywhere , and so are those of ( in or ) since the black - scholes mapping in total variance is smooth .assumption [ assu : svi ] imposes some ( mild yet sometimes unrealistic ) conditions on the volatility surface . it turns out that our results are still valid under weaker conditions on the function .recall first the following definition : [ def ] a continuous function is said to be of class on some interval if it is piecewise on , in the sense that for some and , on , a_{i+1}[ ] , , not constant .let us denote the ( possibly empty ) set of discontinuity of . under our assumption in the distribution senseis pointwise defined ( and continuous ) in , and the jump formula for distributions tells us that it is given by a dirac mass at each point of discontinuity .we extend the results of section [ sec_butter ] in the following way . forany define the set [ prop : weak ] * second coupling condition , general formulation * the surface given in is free of butterfly arbitrage if and only if ( with the convention ) and the jumps of are non - negative . as in the proof of [ prop : coupling2 ] , in the distribution sense , the equality holds pointwise outside a , and the first part of the proposition follows .the remaining part of the distribution is the sum of the disjoint dirac masses . by localisationit is clear that the distribution is positive if and only if its continuous part on is positive and each of its point mass distribution is positive . since is non - negative if and only if has a non - negative jump at , the rest of the proposition follows . from the previous proposition ,the analogous of proposition [ prop : nocsastrictly ] holds , with [ prop : nocsastrictlyweak ] assume is asymptotically linear and there is no calendar spread arbitrage .then is neither empty nor bounded from above .moreover , there is no butterfly arbitrage if and only if the following two conditions hold : a. b. 
for any , and the jumps of are non - negative .in order to find examples of pairs , with different from the svi parameterisation , observe first that the second coupling condition ( proposition [ prop : coupling2 ] ) is more geared towards finding out given than the other way round .we first start with a partial result ( proved in appendix [ sec : prop : effboundpsi ] ) in the other direction , assuming that is asymptotically linear .[ prop : effboundpsi ] if the generalised svi surface is free of static arbitrage and is asymptotically linear and , then there exist and such that for all the following upper bound holds ( with defined in ) using this proposition , we now move on to specific examples of non - svi families .we now provide a triplet , different from the svi form , which characterises an arbitrage - free volatility surface via .define the function by and the functions and by a few remarks are in order : * the function is continuous on ; * ; * the map is increasing and its limit is ; * the function inspired from the computations in proposition [ prop : effboundpsi]is symmetric and continuous on .it is also on , and asymptotically linear .its derivative is therefore piecewise , moreover it has a positive jump at the origin , so we are in the framework of propositions [ prop : weak ] and [ prop : nocsastrictlyweak ] . with these functions ,the total implied variance reads and the following proposition ( proved in appendix [ sec : prop : nonsvi1 ] ) is the main result here : [ prop : nonsvi1 ] the surface is free of static arbitrage . we propose a new triplet characterising an arbitrage - free volatility surface via .define the function by and where is a real number in and with .note that when , modulo a constant , the function corresponds to svi .we could in principle let depend on .the reason for the construction above is that we want to show that the corresponding implied volatility surface is free of static arbitrage for all .the same remarks as in the example in section [ sec : nonsvi ] hold : is continuous on , , is increasing to and is symmetric and continuous on .it is also on , on , and asymptotically linear .the derivative has a positive jump at 0 , so that we are back in the framework of propositions [ eq : setzbarpweak ] and [ prop : nocsastrictlyweak ] . with these functions ,the total implied variance reads and we can check all the conditions preventing arbitrage : ( the proof is postponed to appendix [ sec : prop : nonsvi2 ] [ prop : nonsvi2 ] the surface is free of static arbitrage .the functions and are clearly well - defined and non - negative .consider first .it is readily seen that the function is a primitive of .we now proceed to prove that is indeed a density .let denote the cumulative distribution function of the standard gaussian distribution .an explicit computation yields ( the reverse one can be found in ( * ? ? ?* lemma 2.2 ) ) where and their derivatives are evaluated at , and where we have used the equality . 
evaluating the right - hand side at , and using the fact that , we obtain if then where we have used the smb condition in assumption ( ii ) .let us first deal with the case when tends to ( positive ) infinity .from lemma [ lem : rt10 ] , tends to zero .the key point is that is the primitive of a non - negative function , therefore it is non - decreasing and has a ( generalised ) limit ] and ] .the two conditions in proposition [ prop : nocsastrictly ] read from the proof of proposition [ prop : nocsastrictly ] , we know that when , the first condition simplifies to }\left|\frac{4}{\psi_\nu'(z)}-\frac{2z}{\psi_\nu(z)}\right|.\ ] ] now , immediate computations yield which , as a function of is defined on , is strictly increasing on and strictly decreasing on . therefore , is infimum over the interval ] .define the function by .there exists a unique such that is strictly increasing on and strictly decreasing on with .setting , the inequality is clearly satisfied for any and all . to conclude , note that for , the second derivative has a mass at the origin , but is convex which implies that this mass is positive and that in the distributional sense following section [ sec : nonsmooth ] . therefore the implied volatility surface is free of static arbitrage and the proposition follows .the function in therefore reads , and is decreasing from to . regarding the function , it is clearly continuous , increasing from to and with . by proposition [ prop : first_coupling ] ,straightforward computations then show that the volatility surface is free of calendar - spread arbitrage . now , for any , we have which is a decreasing function of with limit equal to zero. therefore , and for any , .let us check that the generalised svi surface parameterised by the previous triplet satisfies as a distribution .indeed we only checked that as a function defined everywhere except at the origin ( where as usual in distribution notations is a function defined where is defined ) . here ( where stands for the dirac mass at the origin ) , so that , which is positive since . finally , decreases to .since , the condition is sufficient to prevent butterfly arbitrage .in section [ sec : arbitrage ] we stressed that , following roper or the variant in proposition [ prop : nomass ] , the positivity of the operator in guarantees the existence of a martingale explaining market prices .as a consequence , the celebrated moment formula holds : [ thm : leemoment ] let represent the stock price at time , assumed to be a non - negative random variable with positive and finite expectation .let and .then $ ] and .condition ( 1 ) implies condition ( i ) in proposition [ prop : nomass ] , and conditions ( 2 ) and ( 3 ) imply the smb and lmb limits ( ii ) and ( iii ) in proposition [ prop : nomass ] .therefore by proposition [ prop : nomass ] , the centred probability density is well defined on and , for any , we have , where as tends to infinity , straightforward computations show that since is a second - order strictly convex polynomial with , the function is integrable as long as , i.e. , or .in other words , we have proved that .p. carr and l. wu. a new simple approach for for constructing implied volatility surfaces .preprint available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1701685[papers.ssrn.com/sol3/papers.cfm?abstract_id=1701685 ] , 2010 .a. gulisashvili and e. stein .asymptotic behavior of the stock price distribution density and implied volatility in stochastic volatility models . 
_ applied mathematics & optimization _ , 61 ( 3):287 - 315 , 2008 . m. roper . arbitrage free implied volatility surfaces . working paper , the university of sydney , available at http://www.maths.usyd.edu.au/u/pubs/publist/preprints/2010/roper-9.pdf , 2010 .
|
in this article we propose a generalisation of the recent work of gatheral - jacquier on explicit arbitrage - free parameterisations of implied volatility surfaces . we also discuss extensively the notion of arbitrage freeness and roger lee's moment formula using the recent analysis by roper . we further exhibit an arbitrage - free volatility surface different from gatheral's svi parameterisation .
|
this article considers questions from bayesian statistics in an infinite dimensional setting , for example in function spaces .we assume our state space to be a general separable banach space . while in the finite - dimensional setting , the prior and posterior distribution of such statistical problems can typically be described by densities w.r.t .the lebesgue measure , such a characterisation is no longer possible in the infinite dimensional spaces we consider here : it can be shown that no analogue of the lebesgue measure exists in infinite dimensional spaces .one way to work around this technical problem is to replace lebesgue measure with a gaussian measure on , _i.e. _ with a borel probability measure on such that all finite - dimensional marginals of are ( possibly degenerate ) normal distributions . using a fixed , centred ( mean - zero ) gaussian measure as a reference measure, we then assume that the distribution of interest , , has a density with respect to : measures of this form arise naturally in a number of applications , including the theory of conditioned diffusions and the bayesian approach to inverse problems . in these settingsthere are many applications where is a locally lipschitz continuous function and it is in this setting that we work .our interest is in defining the concept of `` most likely '' functions with respect to the measure , and in particular the _ maximum a posteriori _ estimator in the bayesian context. we will refer to such functions as map estimators throughout .we will define the concept precisely and link it to a problem in the calculus of variations , study posterior consistency of the map estimator in the bayesian setting , and compute it for a number of illustrative applications . to motivate the form of map estimators considered here we consider the case where is finite dimensional and the prior is gaussian .this prior has density with respect to the lebesgue measure where denotes the euclidean norm .the probability density for with respect to the lebesgue measure , given by , is maximised at minimisers of where .we would like to derive such a result in the infinite dimensional setting . the natural way to talk about map estimators in the infinite dimensionalsetting is to seek the centre of a small ball with maximal probability , and then study the limit of this centre as the radius of the ball shrinks to zero . to this end , let be the open ball of radius centred at . if there is a functional , defined on , which satisfies then is termed the _ onsager - machlup _ functional . for any fixed , the function for which the above limit ismaximal is a natural candidate for the map estimator of and is clearly given by minimisers of the onsager - machlup function . in the finite dimensional caseit is clear that given by is the onsager - machlup functional .from the theory of infinite dimensional gaussian measures it is known that copies of the gaussian measure shifted by are absolutely continuous w.r.t . itself , if and only if lies in the cameron - martin space ; furthermore , if the shift direction is in , then shifted measure has density in the finite dimensional example , above , the cameron - martin norm of the gaussian measure is the norm and it is easy to verify that holds for all . in the infinite dimensional case ,it is important to keep in mind that only holds for .similarly , the relation only holds for . 
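in the finite - dimensional example above , computing the map estimator therefore amounts to a tikhonov - type minimisation of phi(u) + (1/2) <u , C^{-1} u> . the sketch below illustrates only this finite - dimensional motivation ; the matrix A , the data y and the noise variance are invented for the example , and the point of the paper is precisely to make sense of the analogous minimisation when C is the covariance of a gaussian measure on a banach space .

```python
import numpy as np
from scipy.optimize import minimize

def finite_dim_map(phi, C, u0):
    # minimise I(u) = phi(u) + 0.5 * <u, C^{-1} u>, the finite-dimensional
    # onsager-machlup functional associated with the centred gaussian prior N(0, C)
    C_inv = np.linalg.inv(C)
    return minimize(lambda u: phi(u) + 0.5 * u @ C_inv @ u,
                    u0, method="BFGS").x

# toy example (all values invented for illustration): a linear forward map
# with additive gaussian noise, so phi is a quadratic data misfit
A = np.array([[1.0, 0.5], [0.0, 1.0]])
y = np.array([1.0, -0.3])
noise_var = 0.1
phi = lambda u: 0.5 * np.sum((y - A @ u) ** 2) / noise_var
u_map = finite_dim_map(phi, C=np.eye(2), u0=np.zeros(2))
```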
in our application ,the cameron - martin formula is used to bound the probability of the shifted ball from equation .( for an exposition of the standard results about small ball probabilities for gaussian measures we refer to ; see also for related material . )the main technical difficulty that is encountered stems from the fact that the cameron - martin space , while being dense in , has measure zero with respect to .an example where this problem can be explicitly seen is the case where is the wiener measure on ; in this example corresponds to a subset of the sobolov space , which has indeed measure zero w.r.t .wiener measure .our theoretical results assert that despite these technical complications the situation from the finite - dimensional example , above , carry over to the infinite dimensional case essentially without change . in theorem [ t :om ] we show that the onsager - machlup functional in the infinite dimensional setting still has the form , where is now the cameron - martin norm associated to ( using for ) , and in corollary [ c : mapmin ] we show that the map estimators for lie in the cameron - martin space and coincide with the minimisers of the onsager - machlup functional . in the second part of the paper , we consider the inverse problem of estimating an unknown function in a banach space , from a given observation , where here is a possibly nonlinear operator , and is a realization of an -valued centred gaussian random variable with known covariance . a prior probability measure put on , and the distribution of is given by , with assumed independent of . under appropriate conditions on and ,bayes theorem is interpreted as giving the following formula for the radon - nikodym derivative of the posterior distribution on with respect to : where derivation of bayes formula for problems with finite dimensional data , and in this form , is discussed in . clearly , then, bayesian inverse problems with gaussian priors fall into the class of problems studied in this paper , for potentials given by which depend on the observed data . 
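the potentials in question are , in the standard formulation of such finite - data problems , weighted least - squares misfits of the form phi(u ; y) = (1/2) | Gamma^{-1/2} ( y - G(u) ) |^2 ; the displayed formula is missing from this text , so this should be taken as our reading of the setting rather than a quotation . a minimal sketch , with the forward map G left as a user - supplied placeholder :

```python
import numpy as np

def misfit_potential(G, Gamma):
    # phi(u; y) = 0.5 * | Gamma^{-1/2} (y - G(u)) |^2, the negative log-likelihood
    # of the finite-dimensional observation y = G(u) + eta with eta ~ N(0, Gamma);
    # the forward map G is a user-supplied placeholder
    Gamma_inv = np.linalg.inv(Gamma)
    def phi(u, y):
        r = y - G(u)
        return 0.5 * r @ Gamma_inv @ r
    return phi

# e.g. phi = misfit_potential(G=lambda u: A @ u, Gamma=noise_var * np.eye(2));
# after discretisation it can be plugged into the minimiser sketched above
```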
when the probability measure arises from the bayesian formulation of inverse problems , it is natural to ask whether the map estimator is close to the truth underlying the data , in either the small noise or large sample size limits .this is a form of bayesian posterior consistency , here defined in terms of the map estimator only .we will study this question for finite observations of a nonlinear forward model , subject to gaussian additive noise .the paper is organized as follows : * in section [ s : bayes ] we detail our assumptions on and ; * in section [ s : map ] we give conditions for the existence of an onsager - machlup functional and show that the map estimator is well - defined as the minimiser of this functional ; * in section [ s : consistency ] we study the problem of bayesian posterior consistency by studying limits of onsager - machlup minimisers in the small noise and large sample size limits ; * in section [ sec : fm ] we study applications arising from data assimilation for the navier - stokes equation , as a model for what is done in weather prediction ; * in section [ sec : cd ] we study applications arising in the theory of conditioned diffusions .we conclude the introduction with a brief literature review .we first note that map estimators are widely used in practice in the infinite dimensional context .we also note that the functional in resembles a tikhonov - phillips regularization of the minimisation problem for , with the cameron - martin norm of the prior determining the regularization . in the theory of classical non - statistical inversion , formulation viatikhonov - phillips regularization leads to an infinite dimensional optimization problem and has led to deeper understanding and improved algorithms .our aim is to achieve the same in a probabilistic context .one way of defining a map estimator for given by is to consider the limit of parametric map estimators : first discretize the function space using parameters , and then apply the finite dimensional argument above to identify an onsager - machlup functional on . passing to the limit in the functional provides a candidate for the limiting onsager - machlup functional .this approach is taken in for problems arising in conditioned diffusions . unfortunately , however, it does not necessarily lead to the correct identification of the onsager - machlup functional as defined by .the reason for this is that the space on which the onsager - mahlup functional is defined is smoother than the space on which small ball probabilities are defined .small ball probabilities are needed to properly define the onsager - machlup functional in the infinite dimensional limit .this means that discretization and use of standard numerical analysis limit theorems can , if incorrectly applied , use more regularity than is admissible in identifying the limiting onsager - mahlup functional .we study the problem directly in the infinite dimensional setting , without using discretization , leading , we believe , to greater clarity . adopting the infinite dimensional perspective for map estimation has been widely studied for diffusion processes and related stochastic pdes ; see for an overview .our general setting is similar to that used to study the specific applications arising in the papers . 
by working with small ball properties of gaussian measures , and assuming that has natural continuity properties , we are able to derive results in considerable generality .there is a recent related definition of map estimators in , with application to density estimation in . however , whilst the goal of minimising is also identified in , the proof in that paper is only valid in finite dimensions since it implicitly assumes that the cameron - martin norm is .s .finite . in our specific application to fluid mechanicsour analysis demonstrates that widely used _ variational methods _ may be interpreted as map estimators for an appropriate bayesian inverse problem and , in particular , that this interpretation , which is understood in the atmospheric sciences community in the finite dimensional context , is well - defined in the limit of infinite spatial resolution .posterior consistency in bayesian nonparametric statistics has a long history .the study of posterior consistency for the bayesian approach to inverse problems is starting to receive considerable attention .the papers are devoted to obtaining rates of convergence for linear inverse problems with conjugate gaussian priors , whilst the papers study non - conjugate priors for linear inverse problems .our analysis of posterior consistency concerns nonlinear problems , and finite data sets , so that multiple solutions are possible .we prove an appropriate weak form of posterior consistency , without rates , building on ideas appearing in .our form of posterior consistency is weaker than the general form of bayesian posterior consistency since it does not concern fluctuations in the posterior , simply a point ( map ) estimator .however we note that for linear gaussian problems there are examples where the conditions which ensure convergence of the posterior mean ( which coincides with the map estimator in the linear gaussian case ) also ensure posterior contraction of the entire measure .throughout this paper we assume that is a separable banach space and that is a centred gaussian ( probability ) measure on with cameron - martin space .the measure of interest is given by and we make the following assumptions concerning the _ potential _ . [ a : asp1 ] the function satisfies the following conditions : * for every there is an , such that for all , * is locally bounded from above , _i.e. _ for every there exists such that , for all with we have * is locally lipschitz continuous , _i.e. _ for every there exists such that for all with we have assumption [ a : asp1](i ) ensures that the expression for the measure is indeed normalizable to give a probability measure ; the specific form of the lower bound is designed to ensure that application of the fernique theorem ( see or ) proves that the required normalization constant is finite .assumption [ a : asp1](ii ) enables us to get explicit bounds from below on small ball probabilities and assumption [ a : asp1](iii ) allows us to use continuity to control the onsager - machlup functional .numerous examples satisfying these condition are given in the references .finally , we define a function by we will see in section [ s : map ] that is the onsager - machlup functional .[ r : mean ] we close with a brief remark concerning the definition of the onsager - machlup function in the case of non - centred reference measure . 
shifting coordinates by it is possible to apply the theory based on centred gaussian measure , and then undo the coordinate change .the relevant onsager - machlup functional can then be shown to be this section we prove two main results . the first , theorem [ t : om ] , establishes that given by is indeed the onsager - machlup functional for the measure given by. then theorem [ t : map ] and corollary [ c : mapmin ] , show that the map estimators , defined precisely in definition [ d : map ] , are characterised by the minimisers of the onsager - machlup functional . for ,let be the open ball centred at with radius in .let be the mass of the ball .we first define the map estimator for as follows : [ d : map ] let any point satisfying , is a map estimator for the measure given by .we show later on ( theorem [ t : map ] ) that a strongly convergent subsequence of exists and its limit , that we prove to be in , is a map estimator and also minimises the onsager - machlup functional .corollary [ c : mapmin ] then shows that any map estimator as given in definition [ d : map ] lives in as well , and minimisers of characterise all map estimators of .one special case where it is easy to see that the map estimator is unique is the case where is linear , but we note that , in general , the map estimator can not be expected to be unique . to achieve uniqueness , stronger conditions on would be required .we first need to show that is the onsager - machlup functional for our problem : [ t : om ] let assumption [ a : asp1 ] hold .then the function defined by is the onsager - machlup functional for , _i.e. _ for any have note that is finite and positive for any by assumptions [ a : asp1](i),(ii ) together with the fernique theorem and the positive mass of all balls in , centred at points in , under gaussian measure .the key estimate in the proof is the following consequence of proposition 3 in section 18 of : this is the key estimate in the proof since it transfers questions about probability , naturally asked on the space of full measure under , into statements concerning the cameron - martin norm of , which is almost surely infinite under .we have by assumption [ a : asp1 ] ( iii ) , for any where with . therefore ,setting and , we can write now , by , we have with as .thus similarly we obtain with as and deduce that inequalities and give the desired result .we note that similar methods of analysis show the following : let the assumptions of theorem [ t : om ] hold . then for any .noting that we consider to be a probability measure and hence with , arguing along the lines of the proof of the above theorem gives with ( where is as in definition [ a : asp1 ] ) and as .the result then follows by taking and as .[ p : prop ] suppose assumptions [ a : asp1 ] hold. 
then the minimum of is attained for some element .the existence of a minimiser of in , under the given assumptions , is proved as theorem 5.4 in ( and as theorem 2.7 in in the case that is non - negative ) .the rest of this section is devoted to a proof of the result that map estimators can be characterised as minimisers of the onsager - machlup functional ( theorem [ t : map ] and corollary [ c : mapmin ] ) .[ t : map ] suppose that assumptions [ a : asp1 ] ( ii ) and ( iii ) hold .assume also that there exists an such that for any .* let .there is a and a subsequence of which converges to strongly in .* the limit is a map estimator and a minimiser of .the proof of this theorem is based on several lemmas .we state and prove these lemmas first and defer the proof of theorem [ t : map ] to the end of the section where we also state and prove a corollary characterising the map estimators as minimisers of onsager - machlup functional .[ l : limg0 ] let . for any centred gaussian measure on a separable banach space have where and is a constant independent of and .we first show that this is true for a centred gaussian measure on with the covariance matrix ] ( noting that ) . by anderson s inequality for the infinite dimensional spaces ( see theorem 2.8.10 of ) we have and therefore and since is arbitrarily small the result follows for the finite - dimensional case . to show the result for an infinite dimensional separable banach space , we first note that , the orthogonal basis in the cameron - martin space of for , separates the points in , therefore is an injective map from into .let and then , since is a radon measure , for the balls and , for any , there exists large enough such that the cylindrical sets and satisfy and for , where denotes the symmetric difference .let and and for , large enough so that .with we have since and converge to zero as , the result follows .[ l : limg0-ne ] suppose that , and converges weakly to in as .then for any there exists small enough such that let be the covariance operator of , and the eigenfunctions of scaled with respect to the inner product of , the cameron - martin space of , so that forms an orthonormal basis in .let be the corresponding eigenvalues and .since converges weakly to in as , and as , for any , there exists sufficiently large and sufficiently small such that where . by ( [ e : wcx ] ) , for small enough we have and therefore let map to , and consider to be defined as in ( [ e : defj0n ] ) .having ( [ e : la ] ) , and choosing such that , for any we can write as was arbitrary , the constant in the last line of the above equation can be made arbitrarily small , by making sufficiently small and sufficiently large . having this and arguing in a similar way to the final paragraph of proof of lemma [ l : limg0 ] , the result follows .[ c : limg0-ne ] suppose that .then [ l : limg0-wc ] consider and suppose that converges weakly and not strongly to in as .then for any there exists small enough such that since converges weakly and not strongly to , we have and therefore for small enough there exists such that for any .let , and , , be defined as in the proof of lemma [ l : limg0-ne ] . 
since as , also , as for -almost every , and is an orthonormal basis in ( closure of in ) , we have now , for any , let large enough such that .then , having ( [ e : wcx0 ] ) and ( [ e : l2b ] ) , one can choose small enough and large enough so that for and therefore , letting and be defined as in the proof of lemma [ l : limg0-ne ] , we can write if is small enough so that . having this and arguing in a similar way to the final paragraph of proof of lemma [ l : limg0 ] , the result follows . having these preparations in place , we can give the proof of theorem [ t : map ] .( _ of theorem [ t : map ] _ )i ) we first show is bounded in . by assumption [ a : asp1].(ii ) for any thereexists such that for any satisfying ; thus may be assumed to be a non - decreasing function of .this implies that we assume that and then the inequality above shows that noting that is independent of .we also can write which implies that for any and now suppose is not bounded in , so that for any there exists such that ( with as ) . by ( [ e : j0 ] ) ,( [ e : zx1 ] ) and definition of we have implying that for any and corresponding this contradicts the result of lemma [ l : limg0 ] ( below ) for small enough .hence there exists such that therefore there exists a and a subsequence of which converges weakly in to as . now , suppose either * there is no strongly convergent subsequence of in , or * if there is one , its limit is not in .let .each of the above situations imply that for any positive , there is a such that for any , we first show that , has to be in . by definition of have ( for ) supposing , in lemma [ l : limg0-ne ] we show that for any there exists small enough such that hence choosing in ( [ e : fringe ] ) such that , and setting , from ( [ e : zd - contra ] ) , we get which is a contradiction .we therefore have .now , knowing that , we can show that the converges strongly in .suppose not .then for the hypotheses of lemma [ l : limg0-wc ] are satisfied .again choosing in ( [ e : fringe ] ) such that , and setting , from lemma [ l : limg0-wc ] and ( [ e : zd - contra ] ) , we get which is a contradiction .hence there is a subsequence of converging strongly in to .\ii ) let ; existence is assured by theorem [ t : om ] . by assumption [ a : asp1 ] ( iii )we have with and . therefore , since is continuous on and in x , suppose is not bounded in or if it is , it only converges weakly ( and not strongly ) in .then and hence for small enough , .therefore for the centered gaussian measure , since we have this , since by definition of , and hence implies that in the case where converges strongly to in , by the cameron - martin theorem we have and then by an argument very similar to the proof of theorem 18.3 of one can show that and ( [ e : bzsup ] ) follows again in a similar way .therefore is a map estimator of measure .it remains to show that is a minimiser of .suppose is not a minimiser of so that .let be small enough so that in the equation before ( [ e : jlims ] ) for any and therefore let .we have and this by ( [ e : bzmin ] ) and ( [ e : bzsup ] ) implies that which is a contradiction , since by definition of , for any .[ c : mapmin ] under the conditions of theorem [ t : map ] we have the following : * any map estimator , given by definition [ d : map ] , minimises the onsager - machlup functional . 
* any which minimises the onsager - machlup functional , is a map estimator for measure given by .* let be a map estimator .by theorem [ t : map ] we know that has a subsequence which strongly converges in to .let be the said subsequence .then by ( [ e : bzsup ] ) one can show that by the above equation and since is a map estimator , we can write then corollary [ c : limg0-ne ] implies that , and supposing that is not a minimiser of would result in a contradiction using an argument similar to last paragraph of the proof of the above theorem . *note that the assumptions of theorem [ t : map ] imply those of theorem [ t : om ] .since is a minimiser of as well , by theorem [ t : om ] we have then we can write the result follows by definition [ d : map ] .the structure , where is gaussian , arises in the application of the bayesian methodology to the solution of inverse problems . in that contextit is interesting to study _ posterior consistency _ : the idea that the posterior concentrates near the truth which gave rise to the data , in the small noise or large sample size limits ; these two limits are intimately related and indeed there are theorems that quantify this connection for certain linear inverse problems . in this sectionwe describe the bayesian approach to nonlinear inverse problems , as outlined in the introduction .we assume that the data is found from application of to the truth with additional noise : the posterior distribution is then of the form and in this case it is convenient to extend the onsager - machlup functional to a mapping from to , defined as we study posterior consistency of map estimators in both the small noise and large sample size limits .the corresponding results are presented in theorems [ t : j_n ] and [ t : j ] , respectively .specifically we characterize the sense in which the map estimators concentrate on the truth underlying the data in the small noise and large sample size limits .let us denote the exact solution by and suppose that as data we have the following random vectors with and independent identically distributed random variables .thus , in the general setting , we have , and a block diagonal matrix with in each block .we have independent observations each polluted by noise , and we study the limit . corresponding to this set of data andgiven the prior measure we have the following formula for the posterior measure on : here , and in the following , we use the notation , and : by corollary [ c : mapmin ] map estimators for this problem are minimisers of our interest is in studying properties of the limits of minimisers of , namely the map estimators corresponding to the preceding family of posterior measures .we have the following theorem concerning the behaviour of when .[ t : j ] assume that is lipschitz on bounded sets and .for every , let be a minimiser of given by .then there exists a and a subsequence of that converges weakly to in , almost surely . 
for any such we have .we describe some preliminary calculations useful in the proof of this theorem , then give lemma [ l : j_n ] , also useful in the proof , and finally give the proof itself .we first observe that , under the assumption that is lipschitz on bounded sets , assumptions [ a : asp1 ] hold for .we note that hence define we have [ l : j_n ] assume that is lipschitz on bounded sets .then for fixed and almost surely , there exists such that we first observe that , under the assumption that is lipschitz on bounded sets and because for a given and fixed realisations there exists an such that , assumptions [ a : asp1 ] hold for . since the result follows by proposition [ p : prop ] . we may now prove the posterior consistency theorem . from onwardsthe proof is an adaptation of the proof of theorem 2 of .we note that , the assumptions on limiting behaviour of measurement noise in are stronger : property ( 9 ) of is not assumed here for our . on the other hand a frequentist approach is used in , while here since is coming from a bayesian approach , the norm in the regularisation term is stronger ( it is related to the cameron - martin space of the gaussian prior ) .that is why in our case asking what if is not in and only in , is relevant and is answered in corollary [ c : utrx ] below . _( of theorem [ t : j ] ) _ by definition of we have therefore using young s inequality ( see lemma 1.8 of , for example ) for the last term in the right - hand side we get taking expectation and noting that the are independent , we obtain where .this implies that and 1 ) we first show using that there exist and a subsequence of such that let be a complete orthonormal system for .then therefore there exists and a subsequence of , such that . now considering and using the same argumentwe conclude that there exists and a subsequence of such that . continuing similarlywe can show that there exist and such that for any and as . therefore we need to show that .we have , for any , therefore and .we can now write for any nonzero now for any fixed we choose large enough so that and then large enough so that this demonstrates that as .\2 ) now we show almost sure existence of a convergent subsequence of . by wehave in probability as .therefore there exists a subsequence of such that now by we have in probability as and hence there exists a subsequence of such that converges weakly to in almost surely as .since is compactly embedded in , this implies that in almost surely as .the result now follows by continuity of . in the casethat ( and not necessarily in ) , we have the following weaker result : [ c : utrx ] suppose that and satisfy the assumptions of theorem [ t : j ] , and that .then there exists a subsequence of converging to almost surely . for any , by density of in , there exists such that .then by definition of we can write therefore , dropping in the left - hand side , and using young s inequality we get by local lipschitz continuity of , , and therefore taking the expectations and noting the independence of we get implying that since the is obviously positive and was arbitrary , we have .this implies that in probability .therefore there exists a subsequence of which converges to almost surely .consider the case where as data we have the random vector for and with again as the true solution and , , gaussian random vectors in .thus , in the preceding general setting , we have and . rather than having independent observations , we have an observation noise scaled by small converging to zero . 
for this data andgiven the prior measure on , we have the following formula for the posterior measure : by the result of the previous section , the map estimators for the above measure are the minimisers of our interest is in studying properties of the limits of minimisers of as .we have the following almost sure convergence result .[ t : j_n ] assume that is lipschitz on bounded sets , and .for every , let be a minimiser of given by .then there exists a and a subsequence of that converges weakly to in , almost surely . for any such we have .the proof is very similar to that of theorem [ t : j ] and so we only sketch differences .we have letting we hence have .for this the result of lemma [ l : j_n ] holds true , using an argument similar to the large sample size case .the result of theorem [ t : j_n ] carries over as well .indeed , by definition of , we have therefore using young s inequality for the last term in the right - hand side we get taking expectation we obtain this implies that and having ( [ e : eglim0 ] ) and ( [ e : eun_bd0 ] ) , and with the same argument as the proof of theorem [ t : j ] , it follows that there exists a and a subsequence of that converges weakly to in almost surely , and for any such we have . as in the large sample size case , here also if we have and we do not restrict the true solution to be in the cameron - martin space , one can prove , in a similar way to the argument of the proof of corollary [ c : utrx ] , the following weaker convergence result : [ c : utrx0 ] suppose that and satisfy the assumptions of theorem [ t : j_n ] , and that . then there exists a subsequence of converging to almost surely .in this section we present an application of the methods presented above to filtering and smoothing in fluid dynamics , which is relevant to data assimilation applications in oceanography and meteorology .we link the map estimators introduced in this paper to the variational methods used in applications , and we demonstrate posterior consistency in this context .we consider the 2d navier - stokes equation on the torus with periodic boundary conditions : here is a time - dependent vector field representing the velocity , is a time - dependent scalar field representing the pressure , is a vector field representing the forcing ( which we assume to be time - independent for simplicity ) , and is the viscosity .we are interested in the inverse problem of determining the initial velocity field from pointwise measurements of the velocity field at later times .this is a model for the situation in weather forecasting where observations of the atmosphere are used to improve the initial condition used for forecasting . for simplicitywe assume that the initial velocity field is divergence - free and integrates to zero over , noting that this property will be preserved in time .define and as the closure of with respect to the norm .we define the map to be the leray - helmholtz orthogonal projector ( see ) .given , define . then an orthonormal basis for given by , where for .thus for we may write where , since is a real - valued function , we have the reality constraint . using the fourier decomposition of , we define the fractional sobolev spaces with the norm , where . 
if , the stokes operator , then .we assume that for some .let , for , and define be the set of pointwise values of the velocity field given by where is some finite set of point in with cardinality .note that each depends on and we may define by .we let be a set of random variables in which perturbs the points to generate the observations in given by we let , the accumulated data up to time , with similar notation for , and define by .we now solve the inverse problem of finding from .we assume that the prior distribution on is a gaussian , with the property that and that the observational noise is i.i.d . in ,independent of , with distributed according to a gaussian measure .if we define then under the preceding assumptions the bayesian inverse problem for the posterior measure for is well - defined and is lipschitz in with respect to the hellinger metric ( see ) .the onsager - machlup functional in this case is given by we are in the setting of subsection [ ssec : sn ] , with and . in the applied literatureapproaches to assimilating data into mathematical models based on minimising are known as _ variational methods _ , and sometimes as 4dvar .illustration of posterior consistency in the fluid mechanics application .the three curves given are the relative error of the map estimator in reproducing the truth , ( solid ) , the relative error of the map in reproducing ( dashed ) , and the relative error of with respect to the observations ( dash - dotted).,scaledwidth=80.0% ] we now describe numerical experiments concerned with studying posterior consistency in the case .we let noting that if , then almost surely for all ; in particular . thus as required .the forcing in is taken to be , where and with the canonical skew - symmetric matrix , and .the dimension of the attractor is determined by the viscosity parameter . for the particular forcing usedthere is an explicit steady state for all and for this solution is stable ( see , chapter 2 for details ) . as decreases the flow becomes increasingly complex and we focus subsequent studies of the inverse problem on the mildly chaotic regime which arises for .we use a time - step of .the data is generated by computing a true signal solving the navier - stokes equation at the desired value of , and then adding gaussian random noise to it at each observation time .furthermore , we let and take , so that .we take spatial observations at each observation time .the observations are made at the gridpoints ; thus the observations include all numerically resolved , and hence observable , wavenumbers in the system .since the noise is added in spectral space in practice , for convenience we define and present results in terms of .the same grid is used for computing the reference solution and for computing the map estimator . figure [ fig : post_cons ] illustrates the posterior consistency which arises as the observational noise strength .the three curves shown quantify : ( i ) the relative error of the map estimator compared with the truth , ; ( ii ) the relative error of compared with ; and ( iii ) the relative error of with respect to the observations .the figure clearly illustrates theorem [ t : j_n ] , via the dashed curve for ( ii ) , and indeed shows that the map estimator itself is converging to the true initial condition , via the solid curve ( i ) , as . 
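for reference , the map estimates shown in the figure are obtained by minimising a discretised version of the functional above , that is a strong - constraint 4dvar - type objective . the sketch below only records the shape of that objective ; the navier - stokes solver 'forward' , the observation operator 'observe' , the prior precision 'C_inv' and the noise variance are placeholders , and nothing here reproduces the actual code used for the experiments .

```python
import numpy as np
from scipy.optimize import minimize

def fourdvar_objective(u0, data, C_inv, noise_var, forward, observe):
    # discretised onsager-machlup / strong-constraint 4dvar-type objective:
    #   I(u0) = 0.5 * <u0, C^{-1} u0>
    #           + sum_j 0.5 * | y_j - observe(forward(u0, t_j)) |^2 / noise_var
    # `forward` (a navier-stokes solver returning the velocity field at time t_j)
    # and `observe` (pointwise evaluation at the observation points) are
    # placeholders; this is a schematic sketch, not the implementation used
    # for the experiments reported here
    prior = 0.5 * u0 @ C_inv @ u0
    misfit = sum(0.5 * np.sum((y_j - observe(forward(u0, t_j))) ** 2) / noise_var
                 for t_j, y_j in data)
    return prior + misfit

# u0_map = minimize(fourdvar_objective, u0_first_guess,
#                   args=(data, C_inv, noise_var, forward, observe),
#                   method="L-BFGS-B").x
```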
recall that the observations approach the true value of the initial condition , mapped forward under , as , and note that the dashed and dashed - dotted curves shows that the image of the map estimator under the forward operator , , is closer to than , asymptotically as this section we consider the map estimator for conditioned diffusions , including bridge diffusions and an application to filtering / smoothing .we identify the onsager - machlup functional governing the map estimator in three different cases .we demonstrate numerically that this functional may have more than one minimiser .furthermore , we illustrate the results of the consistency theory in section [ s : consistency ] using numerical experiments .subsection [ ssec : un ] concerns the unconditioned case , and includes the assumptions made throughout .subsections [ ssec : bd ] and [ ssec : fs ] describe bridge diffusions and the filtering / smoothing problem respectively .finally , subsection [ ssec : ne ] is devoted to numerical experiments for an example in filtering / smoothing . for simplicitywe restrict ourselves to scalar processes with additive noise , taking the form if we let denote the measure on ;{\mathbb{r}}\bigr) ] denote the space of absolutely continuous functions on ] , is the mean of and \bigm| \int_0^t \bigl|v'(s)\bigr|^2\,ds<\infty \mbox { and } v(0)=u^- , u(t)=u^+ \bigr\}.\ ] ] the map estimators for are found by minimising over .we now consider conditioning the measure on observations of the process at discrete time points .assume that we observe given by where and the are independent identically distributed random variables with .let denote the -valued gaussian measure and let denote the -valued gaussian measure where is defined by recall and from the unconditioned case and define the measures and on as follows .the measure is defined to be an independent product of and , whilst . then with constant of proportionality depending only on .clearly , by continuity , and hence applying the conditioning lemma 5.3 in then gives thus we define the cameron - martin space is again and the onsager - machlup functional is thus , given by the map estimator for this setup is , again , found by minimising the onsager - machlup functional .the only difference between the potentials and , and thus between the functionals for the unconditioned case and for the case with discrete observations , is the presence of the term . in the euler - lagrange equations describing the minima of , this term leads to dirac distributions at the observation points and it transpires that , as a consequence , minimisers of have jumps in their first derivates at .this effect can be clearly seen in the local minima of shown in figure [ fig : smooth - minima ] .illustration of the problem of local minima of for the smoothing problem with a small number of observations .the process starts at and moves in a double - well potential with stable equilibrium points at and .two observations of the process are indicated by the two black circles .the curves correspond to four different local minima of the functional for this situation . ] for the experiments we generate a random `` signal '' by numerically solving the sde , using the euler - maruyama method , for a double - well potential given by with diffusion constant and initial value . from the resulting solution we generate random observations using . 
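a minimal sketch of this data - generation step , assuming the usual double - well drift f(x) = x - x^3 with stable equilibria at -1 and 1 ( which matches the description above ; the paper's exact potential , diffusion constant and initial value are not reproduced in this text , so all parameter values below are illustrative ) :

```python
import numpy as np

def simulate_signal_and_observations(T=10.0, dt=0.01, sigma=0.5, x0=-1.0,
                                     n_obs=20, obs_std=0.1, seed=0):
    # euler-maruyama discretisation of dX = f(X) dt + sigma dW with the assumed
    # double-well drift f(x) = x - x**3, followed by noisy observations
    # y_k = X(t_k) + xi_k with xi_k ~ N(0, obs_std**2)
    rng = np.random.default_rng(seed)
    n = int(round(T / dt))
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        x[i + 1] = (x[i] + (x[i] - x[i] ** 3) * dt
                    + sigma * np.sqrt(dt) * rng.standard_normal())
    obs_idx = np.linspace(1, n, n_obs, dtype=int)   # observation grid indices
    y = x[obs_idx] + obs_std * rng.standard_normal(n_obs)
    return x, obs_idx, y
```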
then we implement the onsager - machlup functional from equation and use numerical minimisation , employing the broyden - fletcher - goldfarb - shanno method ( see ; we use the implementation found in the gnu scientific library ) , to find the minima of .the same grid is used for numerically solving the sde and for approximating the values of .the first experiment concerns the problem of local minima of . for small number of observationswe find multiple local minima ; the minimisation procedure can converge to different _ local _ minima , depending on the starting point of the optimisation .this effect makes it difficult to find the map estimator , which is the _ global _ minimum of , numerically .the problem is illustrated in figure [ fig : smooth - minima ] , which shows four different local minima for the case of observations . in the presence of local minima, some care is needed when numerically computing the map estimator .for example , one could start the minimisation procedure with a collection of different starting points , and take the best of the resulting local minima as the result .one would expect this problem to become less pronounced as the number of observations increases , since the observations will `` pull '' the map estimator towards the correct solution , thus reducing the number of local minima .this effect is confirmed by experiments : for larger numbers of observations our experiments found only one local minimum .illustration of posterior consistency for the smoothing problem in the small - noise limit .the marked points correspond the maximum - norm distance between the true signal and the map estimator with evenly spaced observations .the map is the projection of the path onto the observation points .the solid line is a fitted curve of the form . ]the second experiment concerns posterior consistency of the map estimator in the small noise limit .here we use a fixed number of observations of a fixed path of , but let the variance of the observational noise converge to .noting that the exact path of the sde , denoted by in , has the regularity of a brownian motion and therefore the observed path is not contained in the cameron - martin space , we are in the situation described in corollary [ c : utrx0 ] .our experiments indicate that we have as , where denotes the map estimator corresponding to observational variance , confirming the result of corollary [ c : utrx0 ] .as discussed above , for small values of one would expect the minimum of to be unique and indeed experiments where different starting points of the optimisation procedure were tried did not find different minima for small .the result of a simulation with is shown in figure [ fig : smooth - noise ] .illustration of posterior consistency for the smoothing problem in the large sample size limit .the marked points correspond the supremum - norm distance between the true signal and the map estimator with evenly spaced observations .the solid line give a fitted curve of the form ; the exponent was found numerically .] finally , we perform an experiment to illustrate posterior consistency in the large sample size limit : for this experiment we still use one fixed path of the sde . 
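the minimisation itself can be sketched as follows . the discretised functional below combines the classical onsager - machlup form for a scalar sde with additive noise , namely the drift - misfit term plus the (1/2) * integral of f'(u) dt correction with path - independent constants dropped , and the observation term ; this is our reading of the functional minimised in these experiments , and the exact expression in the paper may differ . the example reuses the data - generation sketch above and uses scipy's bfgs in place of the gsl implementation mentioned in the text .

```python
import numpy as np
from scipy.optimize import minimize

def om_functional(path, dt, sigma, obs_idx, y, obs_var, f, f_prime):
    # discretised onsager-machlup functional for the smoothing problem:
    #   (1 / (2 sigma**2)) * sum_i |(u_{i+1} - u_i)/dt - f(u_i)|**2 * dt
    #   + (1/2) * sum_i f_prime(u_i) * dt
    #   + sum_k (y_k - u_{t_k})**2 / (2 * obs_var)
    # path-independent constants are dropped; path[0] is the fixed initial value
    du = np.diff(path) / dt
    drift = f(path[:-1])
    value = 0.5 / sigma ** 2 * np.sum((du - drift) ** 2) * dt
    value += 0.5 * np.sum(f_prime(path[:-1])) * dt
    value += 0.5 * np.sum((y - path[obs_idx]) ** 2) / obs_var
    return value

# illustrative usage, reusing the data-generation sketch above (coarse grid so
# that plain bfgs with numerical gradients stays cheap)
dt, sigma, obs_std, x0 = 0.05, 0.5, 0.1, -1.0
f, f_prime = (lambda u: u - u ** 3), (lambda u: 1.0 - 3.0 * u ** 2)
x, obs_idx, y = simulate_signal_and_observations(T=5.0, dt=dt, sigma=sigma,
                                                 x0=x0, n_obs=10, obs_std=obs_std)
objective = lambda v: om_functional(np.concatenate(([x0], v)), dt, sigma,
                                    obs_idx, y, obs_std ** 2, f, f_prime)
map_path = np.concatenate(([x0], minimize(objective, x[1:], method="BFGS").x))
```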
then , for different values of , we generate observations using at equidistantly spaced times , for fixed , and then determine the distance of the resulting map estimate to the exact path . as discussed above , for large values of one would expect the minimum of to be unique , and indeed experiments where different starting points of the optimisation procedure were tried did not find different minima for large . the situation considered here is not covered by the theoretical results from section [ s : consistency ] , but the results of the numerical experiment , shown in figure [ fig : smooth - data ] , indicate that posterior consistency still holds .
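The large-sample experiment can be condensed into the loop below, a sketch that assumes `compute_map` wraps a minimisation such as the multi-start BFGS above: for each n it draws equispaced noisy observations of the one fixed true path, records the supremum-norm error of the resulting MAP estimate, and finally fits a curve of the form c*n**(-a) by least squares in log-log coordinates.

```python
import numpy as np

def consistency_experiment(compute_map, u_true, gamma, ns, rng):
    """Sup-norm error of the MAP estimate as a function of the number n of
    equispaced observations, plus a least-squares fit of c * n**(-a)."""
    errors = []
    for n in ns:
        idx = np.linspace(1, len(u_true) - 1, n).astype(int)
        y = u_true[idx] + gamma * rng.normal(size=n)
        u_map = compute_map(y, idx)
        errors.append(np.max(np.abs(u_map - u_true)))
    slope, intercept = np.polyfit(np.log(ns), np.log(errors), 1)
    return np.array(errors), np.exp(intercept), -slope   # errors, c, exponent a
```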
|
we consider the inverse problem of estimating an unknown function from noisy measurements of a known , possibly nonlinear , map applied to . we adopt a bayesian approach to the problem and work in a setting where the prior measure is specified as a gaussian random field . we work under a natural set of conditions on the likelihood which imply the existence of a well - posed posterior measure , . under these conditions we show that the _ maximum a posteriori _ ( map ) estimator is well - defined as the minimiser of an onsager - machlup functional defined on the cameron - martin space of the prior ; thus we link a problem in probability with a problem in the calculus of variations . we then consider the case where the observational noise vanishes and establish a form of bayesian posterior consistency for the map estimator . we also prove a similar result for the case where the observation of can be repeated as many times as desired with independent identically distributed noise . the theory is illustrated with examples from an inverse problem for the navier - stokes equation , motivated by problems arising in weather forecasting , and from the theory of conditioned diffusions , motivated by problems arising in molecular dynamics .
|
efficient solutions to similarity or proximity search problem have many increasingly important applications in several areas , most notably in ( multi)media and information retrieval . besides the usual database centric model some similaritysearching methods can be seen as nearest neighbor classifiers as well , and have applications as internal tools in many systems ( e.g. for lossy video or audio compression , pattern recognition and clustering , bioinformatics , machine learning , artificial intelligence , data mining ) .metric space is a pair , where is an universe of objects , and is a distance function .the distance function is _ metric _ , if it satisfies for all in the point of view of the applications , we have some subset of objects , , and we are interested in the proximity of the objects towards themselves , or towards some query objects .the most fundamental type of query is _ range query _ : retrieve all objects in the database that are within a certain similarity threshold to the given query object , that is , compute .another common query ( which can be solved with suitably adapted range query as well ) is to retrieve the -nearest neighbors of in .a large number of different data structures and query algorithms have been proposed , see e.g. . in this paperwe take a fresh look on the well - known gnat data structure .while it has some attractive properties , it is often dismissed as having too large memory requirements ( which is partially based on too coarse analysis , as we show ) .we give several techniques to improve the space complexity , make it memory adaptive in a way that is arguably more elegant than in the baseline gnat , improving its search performance on the same time .we also show that it is possible to replace gnat s hyperplane partitioning with ball partitioning , which gives more flexibility in certain situations and can also further improve the performance .it is also possible to increase the tree arity while keeping the same memory usage .recently gnat gained new interest also in the form of egnat , a dynamic and external memory based variant of gnat .many of our techniques can benefit egnat as well , and we discuss some methods that can improve construction and insertion costs in our gnat variant .we conclude with experimental results that show substantial improvements in space usage and query performance .we briefly review the algorithms relevant to the present work .our work is based on gnat , but gnat itself has some connections to aesa ( which we will make more explicit in what follows ) .egnat is a dynamic external memory variant of gnat .we also give a new analysis for gnat in this section .approximating eliminating search algorithm ( aesa ) is one of the most well - known and one of simplest approaches to index a metric space .it is also the best in terms of number of distance evaluations needed to answer range or -nn queries .the drawbacks are its quadratic space requirement and high extra cpu time ( the time spent on other work than pure distance evaluations ) . the data structure is simply a precomputed matrix of all the distances between the objects in .the space complexity is therefore and the matrix is computed with distance computations .this makes the structure highly impractical for large .the range query algorithm is also simple .first the distance between the query and randomly selected pivot is evaluated . if , then is reported .then each object that does not satisfy is eliminated , i.e. 
we compute a new set .note that the distances can be retrieved from the precomputed matrix .however , the elimination process has to make a linear scan over the set , so the cost is the time for one distance computation plus extra cpu time .this process is repeated with a new pivot taken from the qualifying set .this selection can be random , or e.g. the one that minimizes the lower bound distance to ( which can be maintained during the search with constant factor overhead ) .this is repeated until becomes empty . by experimental results the search algorithm makes only a constant number distance computations on average , which means extra cpu time on average .one should note that the result means that does not affect the number of distance evaluations , while the `` constant '' has exponential dependence on the dimension of the space and the search radius .there are many approaches to reduce the space and/or the extra cpu time ( e.g. ) , but these induce more distance computations or extra cpu time or work only for -nn queries ( ) . geometric near - neighbor access tree ( gnat ) is based on hyperplane partitioning applied recursively to obtain an -ary tree ( where is a constant / parameter ) .the tree is built as follows : 1 .select _ centers _ or pivots ( called split points in ) .2 . associate each object in with the closest center in , obtaining sets .3 . compute a _ distance range table _ for the current node , where ] is empty , we can eliminate ; or in other words , if ] in floating point representation , we actually store .\ ] ] that is , round down and up .one problem with fixed - point representation is that we can not have large magnitude and good precision with a small number of bits , which is a problem if the distance values can be sometimes large and sometimes small .the other problem is more implementation specific , i.e. how to fix ( this could be done dynamically , however ) .one solution that works quite well for a lot of different scenarios is to use some kind of range transform .for example , one could convert into fixed - point representation instead of converting plain . again , notice that this is not a problem as we do no arithmetic in fixed - point representation .however , is suitable only if .better method is to use , for some , as this transforms all positive numbers towards . on the other hand , using very small would mean too much loss of precision . in practice values like workvery well for .the conversion becomes now .\ ] ] to convert a fixed point value back to floating point we do as shown in the experimental results , using just one byte to store the ( continuous ) distance values gives negligible performance loss while reducing the space by a factor of 4 .in some cases using fixed - point instead of floating point actually increases the performance ( i.e. cpu time ) a little , probably due to better cache utilization .another idea is to have smaller range tables by not ( fully ) indexing every center .note that egnat uses just one column of ( corresponding to the nearest neighbor of in ) in each node during searching .this gives the idea of limiting the set of centers where the nearest neighbor can be selected , effectively removing some of the columns from .that is , we can select a subset , and compute sized distance range tables .this does not affect the arity of the tree , just the pruning process , which is trivial to adapt in the case of egnat . for gnat we can replace the aesa - like algorithm e.g. 
with a laesa - like algorithm .preprocessing time for the tables is also improved . in any case , this can make the search algorithm potentially worse , i.e. it may not prune the tree as effectively now , but in return the tree arity can be larger thanks to the smaller tables .the arities can be increased by a factor , for , if , while keeping the same memory usage for the tables ( per node ) .this again makes the clusters smaller , giving an opportunity to more effective pruning .this method may have a positive effect especially in secondary memory implementation .we have implemented the algorithms in c and ran various experiments with different data sets .we used random vectors in uniformly distributed unitary cube as well as 112 dimensional color histograms , both with euclidean distance , as well as an english dictionary and a larger dictionary ( combined from several languages , duplicates removed ) with edit - distance .the databases are from .in each case we picked 1000 objects randomly and used them as the queries , building the database using the rest . in each case the index is built the whole way down , i.e. no bucketing was used for the leaves .pivots were selected in random in all cases .we call our algorithm gnatty in what follows .we used both hyperplane partitioning ( as in original gnat ) and ( unbalanced ) ball partitioning . for ball - partitioning , the optimal value of ( see sec . [sec : ballp ] ) depends on the dimensionality of the space and the selectiveness of the queries , as well as the arities .in particular , for the `` easy '' cases the optimum is , and it decreases as the queries become `` harder '' .[ fig : gamma ] shows two cases ( random vectors in 15 dimensional space and a string dictionary ) where it is beneficial to use .note that the space complexity is also affected , in particular for non - constant arities and large , which means that in most cases the optimum may be impractical .in general , keeping close to and adjusting gives better control for the space / time trade - offs .in all the subsequent plots we use a fixed , as it usually gives quite a noticable performance boost , while not affecting the space complexity when using non - constant arities too much . , range query retrieves 10 neighbors ; 2nd plot : the number of range tables entries corresponding to the previous plot ; 3rd and 4th plots : as above , but for string dictionary and . ], range query retrieves 10 neighbors ; 2nd plot : the number of range tables entries corresponding to the previous plot ; 3rd and 4th plots : as above , but for string dictionary and . ], range query retrieves 10 neighbors ; 2nd plot : the number of range tables entries corresponding to the previous plot ; 3rd and 4th plots : as above , but for string dictionary and . ], range query retrieves 10 neighbors ; 2nd plot : the number of range tables entries corresponding to the previous plot ; 3rd and 4th plots : as above , but for string dictionary and . ], range query retrieves 10 neighbors ; 2nd plot : the number of range tables entries corresponding to the previous plot ; 3rd and 4th plots : as above , but for string dictionary and . ][ fig : alpha ] shows the effect of for two synthetic vector spaces and for english dictionary .the space is very close to for , but starts to increase rapidly after that .note however that the data itself can take a lot of space ; e.g. 
vectors in 15 dimensional space ( using one float per coordinate ) requires bytes , which is easily more than what the range tables require for moderate . in any case, if there are available memory , increasing reduces the number of distance evaluations steadily .observe that ball partitioning gives better results than the original hyperplane partitioning , especially for strings ., range query retrieves 10 or 100 nearest neighbors ; 2nd plot : the number of range tables entries corresponding to the previous plot ; 3rd and 4th plots : as the previous two , but for strings . ], range query retrieves 10 or 100 nearest neighbors ; 2nd plot : the number of range tables entries corresponding to the previous plot ; 3rd and 4th plots : as the previous two , but for strings . ], range query retrieves 10 or 100 nearest neighbors ; 2nd plot : the number of range tables entries corresponding to the previous plot ; 3rd and 4th plots : as the previous two , but for strings . ], range query retrieves 10 or 100 nearest neighbors ; 2nd plot : the number of range tables entries corresponding to the previous plot ; 3rd and 4th plots : as the previous two , but for strings . ][ fig : eqmem ] compares gnatty ( using ball partitioning ) against the original gnat ( hyperplane partitioning and constant arity ) and two variants of egnat , so that all methods use the same amount of memory .we also compare against gnatty that uses fixed - point ( fp ) ( see sec . [sec : fp ] ) to store the range tables ( 1 byte per distance ; the baseline method uses 1 float , i.e. 4 bytes ) .recall that egnat ( besides the added dynamism and external memory implementation ) is as gnat with simpler pruning rules .as seen in the plots , this does not work well for large arities ( the original egnat uses relatively low arities ) .hence we added a nearest neighbor ( nn ) index over the pivots so that the nearest pivot ( along with any pivot in the range ) to the query can be retrieved faster .the performance of gnatty fp is close to gnatty , even if the former uses only 1/4th of the space and approximated distance values .we include list of clusters ( lc ) as a baseline competitor .lc uses only space .the bucket size for lc was optimzed for and , for color histograms and strings , respectively . as an other example ,using on the color histograms database , gnat with hyperplane partitioning would need to reach the performance of gnatty with ball partitioning and . on the large string dictionary for ,gnat would need to match gnatty with .note that the constant factor in the space complexity is often relatively small , as near the leaves it is not possible to use the full arity as there are not enough objects left .e.g. , for and the strings dictionary , gnat requires `` only '' about range table entries .we also ran preliminary experiments on using smaller range tables ( see sec .[ sec : smalltables ] ) . as expected, this reduces the performance , some of which can be bought back by using larger arities ( sometimes the performance is improved a bit ) .the net effect is that using the same space the tree height can be reduced , but the queries become somewhat slower , and this effect increases the smaller the range tables become .we omit the plots . nevertheless , the technique has some promise for external memory implementation , which is a subject of future work . 
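The one-byte storage of range-table distances used by the "gnatty fp" variant in these comparisons can be sketched as below. The log-based range transform and the asymmetric rounding (lower bounds rounded down, upper bounds rounded up, so that pruning stays safe) follow the description given earlier; the class name, the choice of transform parameter and the 8-bit scaling are only illustrative.

```python
import math

class FixedPointCodec:
    """One-byte fixed-point encoding of range-table distances.

    Distances are squashed with a log transform and quantised to `bits` bits.
    Lower bounds are rounded down and upper bounds up, so the decoded interval
    always contains the true distance and no subtree is pruned incorrectly.
    max_dist must bound every distance that will be stored."""

    def __init__(self, max_dist, bits=8):
        self.levels = (1 << bits) - 1
        self.scale = self.levels / math.log1p(max_dist)

    def _squash(self, d):
        return self.scale * math.log1p(d)

    def _unsquash(self, q):
        return math.expm1(q / self.scale)

    def encode_lo(self, d):
        """Store the lower end of a range: round down."""
        return max(0, min(self.levels, math.floor(self._squash(d))))

    def encode_hi(self, d):
        """Store the upper end of a range: round up."""
        return max(0, min(self.levels, math.ceil(self._squash(d))))

    def decode(self, q):
        """Approximate distance value back in floating point."""
        return self._unsquash(q)
```

No arithmetic is done on the encoded values during search; they are only decoded back before the range comparisons, so the loss of precision translates into slightly looser but still correct pruning bounds.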
, the other gnat variant use constant arity that results in the same memory consumption as in gnatty , except gnatty fp uses 1/4th of the memory gnatty uses .lc uses linear space .bottom : as above , but gnatty uses . ] , the other gnat variant use constant arity that results in the same memory consumption as in gnatty , except gnatty fp uses 1/4th of the memory gnatty uses .lc uses linear space . bottom : as above , but gnatty uses . ]we have shown several methods how to improve gnat and verfied their practical performance . however , there are many possibilities for further work . *the hyperplane partitioning construction cost can be lowered somewhat by using an auxiliary index to solve the 1-nn queries in step 2 of the construction algorithm , especially for high arities .that is , build 1-nn index for the centers / pivots , and use 1-nn queries for each object to find its associated center .* bulk loading the tree can also be lazy , i.e. a branch of the tree can be built only on demand , when the search algorithm enters it , which amortizes the search and construction costs . *the range tables for the nodes can be also built in the same spirit as the previous item , i.e. any value can be initialized to some default value and the real value is computed when it is needed the first time .this can be also used with the egnat insertion algorithm to amortize its cost ; i.e. new elements are inserted into leaves , which are initially buckets and promoted to full gnat like internal nodes when they becomes full .* gnatty techniques can be used for external memory implementation as well .egnat uses the same arity for all internal nodes ( including root ) , depending on the disk block size .however , the root node can be made ( much ) larger than the other nodes , as it can be kept in main memory all the time .l. mic , j. oncina , e. vidal , a new version of the nearest - neighbor approximating and eliminating search ( aesa ) with linear preprocessing - time and memory requirements , pattern recognition letters 15 ( 1994 ) 917 .
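For reference, the pruning rule that all of the variants above rely on can be condensed into the following simplified range-search sketch. The node layout is hypothetical: `node.centers` is the list of centres, `node.ranges[i][j]` the stored (min, max) distance interval from centre i to the objects of subtree j, and `node.children[j]` the j-th child; centres whose own subtree has already been discarded are not used for further pruning.

```python
def gnat_range_search(node, q, r, dist, report):
    """Simplified GNAT-style range query.

    Subtree j is discarded as soon as, for some already-evaluated centre c_i,
    the interval [d(q, c_i) - r, d(q, c_i) + r] does not intersect the stored
    range of distances from c_i to the objects of subtree j."""
    if node is None:
        return
    alive = set(range(len(node.centers)))
    for i, c in enumerate(node.centers):
        if i not in alive:
            continue
        d = dist(q, c)
        if d <= r:
            report(c)
        for j in list(alive):
            lo, hi = node.ranges[i][j]
            if d + r < lo or d - r > hi:
                alive.discard(j)
    for j in alive:
        gnat_range_search(node.children[j], q, r, dist, report)
```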
|
geometric near - neighbor access tree ( gnat ) is a metric space indexing method based on hierarchical hyperplane partitioning of the space . while gnat is very efficient in proximity searching , it has a bad reputation of being a memory hog . we show that this reputation is partially based on a too coarse analysis , and that the memory requirements can be lowered while at the same time improving the search efficiency . we also show how to make gnat memory adaptive in a smooth way , and that the hyperplane partitioning can be replaced with ball partitioning , which can further improve the search performance . we conclude with experimental results showing that the new methods can give a significant performance boost . gnat , egnat , aesa , metric space indexing , generalized hyperplane partitioning , ball partitioning
|
the support given by the imaging to medical diagnostics is fundamental during the pathology discovery as well as for biochemical characterization of biological structures - .the imaging methods involve electromagnetic waves in a frequency range that spans from some hz to ghz and over .hence , the understanding of biological structures response to the electromagnetic field is fundamental .the investigation of the interaction between healthy or pathological biological tissues and electromagnetic waves is a hot topic yet .in fact , understanding how an electromagnetic wave interacts with the human body becomes increasingly important if one considers that the most of imaging methods involve scanning of wide areas of the human body , even if only small areas need to be analyzed . in this way , although a wide scanning allows to acquire a big amount of data by single exposition , also areas of the body not interested in the diagnostics are exposed to the waves , so increasing the invasiveness for the patient .for this reason , new imaging systems able to analyze only the area under diagnostics ( confined scanning ) , are becoming more and more investigated .a big input to micro - imaging systems has been given by the micro - electronic technology , which allows the development of systems for the scanning of confined small areas .in fact , emitters and receivers ( sensors ) smaller and smaller can be made , so that the final size of the imaging systems as well as the wave s spot become very small .generally , the acquired data need to be appropriately elaborated extracting the imaging information by means of inverse optimization algorithms ,, . through these algorithmshigh quality information can be extracted from the data acquired by the sensors , even if the quality of the sensor s signal is low , because of their small area which may determine a significant amount of noise . in thispaper a method to elaborate the shape of microstructures for application in the medical field is proposed .the method works with low power waves in the very high frequency ( vhf ) range , in order to achieve a low invasiveness for the human body .the method uses a system endowed with : a microtransmitter to emit a magnetic field , a sensors panel to acquire the spatial distribution of the magnetic field and an elaboration logic to acquire and elaborate the sensor s signals .the microtransmitter radiates the vhf waves by means of a microantenna , which is able to take the shape of the target structure .if the micro - antenna is assumed to be a sequence of thin - wire interconnected short dipoles , then it is possible to reconstruct the micro - structure s image by measuring the spatial distribution of the magnetic field .in fact , the shape reconstruction is possible by estimating the location of thin - wire antenna against the sensors panel .the recognition problem of the thin - wire antenna s location by magnetic field can be addressed as an inverse problem .the thin - wire antenna is supposed to be a sequence of linear segments : given a model for the characterization of the magnetic field at a point in space ( forward problem ) and given a set of measurements of the magnetic field amplitude , it is possible to solve the inverse problem in terms of the distance of the antenna from the sensors panel . 
in this paper , preliminary results about the previous basic idea ,are reported concerning two simulated scenarios .namely , the spatial distribution of the emitted magnetic field is simulated through a numerical model based on the method of moments ( mom ) , where the first kind fredhlom s integral equation is solved by the point matching procedure - .the spatial distribution of the magnetic field is evaluated on an area equivalent to the area of the sensors panel .these magnetic field values are used as the measured input for the inverse problem .the levenberg - marquart algorithm , is considered in solving the inverse problem by means of the minimization of the euclidean distance between the measured field and the field generated by a given configuration of thinwire piecewise antenna .the numerical procedure involved by the algorithm rounds the entries of the hessian matrix , step by step , and the location of the antenna s segments can be estimated with high precision , so obtaining the image of the shape taken by the antenna .the paper is structured as follows . in section 2 the mathematical framework about the proposed methodis presented . in section 3 some numerical results about reconstruction examplesare discussed , then a conclusion about the method results and ongoing work complete the paper .the shape of a biological thin micro - structure can be estimated by means of an appropriate embedded emitting antenna and by measuring the spatial distribution of the opportune radiated electromagnetic field component .in particular , the magnetic field may be the best choice together with the selection of the appropriate work frequency range .the proposed approach is a typical inverse problem : the measured spatial distribution of the magnetic field emitted by a vhf thinwire antenna ( i.e. with radius less than one hundredth of the maximum work wavelength ) is the input , and the source shape reconstruction is the final target .the antenna is assumed to be a piecewise linear structure , i.e. a sequence of branches with uniform radius and conductivity .the antenna is fed at the point by a sinusoidal current source with known frequency and amplitude .the spatial distribution of the magnetic field is detected by a sensors panel with sensors distributed on its surface .as a first approximation , the surrounding medium is supposed to be homogeneous , isotropic , with conductivity , electric permittivity and magnetic permeability . under these assumptions ,the shape reconstruction involves the minimization of an objective function shown in equation ( 1 ) , by using an iterative procedure . roughly speaking, step by step the iterative procedure looks for a set of segments coordinates that minimize the difference between the computed and measured magnetic field at sensor points . 
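Written as code, the quantity being minimised is simply the Euclidean misfit between the measured sensor readings and the readings predicted for a candidate antenna geometry. The sketch below treats the field computation as a black box: `forward_solver` is a placeholder for the MoM-based solver detailed in what follows, not an implementation of it.

```python
import numpy as np

def misfit(p, h_measured, forward_solver):
    """Objective of the shape reconstruction: 2-norm distance between the field
    computed for candidate segment end-point coordinates p and the field
    measured at the sensor points."""
    return np.linalg.norm(forward_solver(p) - h_measured)
```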
in equation ( [ minimo ] ) ^{t } $ ] is the vector of the cartesian coordinates of the antenna segments ends to be estimated , is the vector of the magnetic field values at sensors points computed by assuming that the segments ends are located at p , and is the vector with the measured magnetic field values .the values of are calculated by the forward solver .the forward solver computes the amplitude of the magnetic field at a set of points , for a given signal source and antenna s characteristics .more precisely , the current distribution along the antenna is evaluated by solving an appropriate integral equation in frequency domain , derived from maxwell s equations .then , by using the relation that link the currents to the magnetic field , this latter can be calculated at the sensors points .the equations of the forward model is solved numerically . for the problem addressed in this case , the moments method ( mom ) via point - matching procedure in the frequency domain , is performed .note that , by this method the integral equations involved include the boundary conditions , so that only the emitter have to be discretized but not the problem s domain .this means that each antenna s branch needs to be split in a finite number of linear segments - .the problem formulation leads to a first kind fredholm s integral equation .in fact , by expressing the electric field with the retarded magnetic vector potential , the source frequency and the scalar potential , as shown in equation ( 2 ) : by introducing the boundary conditions by means of the per - unit - length surface impedance , the following equation holds : in which _ tg _ indicates the component tangential to the wire surface , is the longitudinal current flowing into the conductor , concentrated on its axis as the unit vector ) because of the thin - wire assumption , , is the unit vector tangential to the conductor s surface , as shown in figure 1 .[ fig : fig1 ] by using the relation between the magnetic vector potential and the conduction current density , and the relation between the scalar electric potential and the free electric charges density , the well known electric field integral equation ( efie ) is obtained .then , by integrating the efie along the surface of the conductor , the following modified efie is obtained , : equation ( [ efiem ] ) is a general relation that depends only on thelongitudinal current and on geometrical quantities of the conductors constituting the thin - wire structures to be analyzed .in fact , is the length of the exciting conductor , _l _ is the length of the induced conductor , and are space position vectors of the observation and source points , respectively , is the green s function in an unbounded region . the quantity take into account the complex medium permittivity and is the wave number .as already underlined , the forward problem , represented by equation ( [ efiem ] ) , is numerically solved by splitting the thin - wire antenna into a finite number of linear segments of length . in scientific literaturenumerous results , show that an acceptable accuracy can be obtained for the solution of equation ( 4 ) , by assuming and a linear distribution of current along each segment . 
in this way ,a linear system of order is obtained with _n _ as the number of segments .once the currents are computed , the magnetic field components in the surrounding medium are given by the dipole theory , by superposing the effects of all segments , .as discussed above , once the direct solver is obtained , the inverse problem can be then approached . in figure 2 the blocks diagram of the algorithm used to solve the inverse problemis shown .[ fig : fig2 ] the unknown end - points segments coordinates s are obtained through an iterative procedure which , at each step , computes the correction factors needed to obtain the new set of coordinates .the correction factors are evaluated by the optimization functionality , on the basis of the difference in _2_-norm between the measured fields and the fields computed by the forward solver .since the problem is strictly nonlinear , the solver of the inverse problem was based on the levenberg - marquardt ( lm ) algorithm [ 15 ] . in brief , the lm algorithm starts from an initial set of reasonable end - points segments coordinates : this set is then updated at each step _ k _ by adding the solutions of the linear system shown in equation ( [ lm ] ) \right)\delta \textbf{s}^{(k)}= -\textbf{g}^{(k)^{t}}\textit{f}(\textbf{s}^{(k)})\ ] ] in equation ( [ lm ] ) the matrix is an approximation of the hessian matrix of at step and is equal to , with as the gradient of the error function at step , while is an adaptive parameter .once * * is known , the new coordinates are calculated by the following equation : order to validate the capabilities of the proposed method , some numerical experiments concerning the position estimation estimation of the antenna s branches are carried out .the thin - wire antenna is assumed to be made by a nickel titanium thin wire 0.1 mm in radius , 2.1 cm in length with an electrical conductivity of s / m . in this waythe assumption is verified .the nickel - titanium alloy is selected because it is the most diffused alloy in the biomedical applications .it has to be underlined that this type of antenna can be effectively realized in practice and some of the human cavities may be investigated .moreover , the number of branches for the antenna is set to , with shape shown in figure 3 and figure 4 by the red dashed line with squared markers. the antenna is fed by a sinusoidal current source with frequency equal to 100 mhz , and amplitude spacing in the following set of values : 0.1 ma , 1 ma , 5 ma , 25 ma and 50 ma .it has to be underlined that all these current values as well as the selected frequency can be well tolerated by the human tissues .+ the ground surface of the antenna lies on the plane as showed in figure 3 and figure 4 , and the feed point is placed at a distance of 2 mm from the ground plane .[ fig : fig3 ] two sets of 12 sensors are considered .the sensors are assumed to be sensitive to the component along the axis of the magnetic field and placed on a flat surface in a first case study . 
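For completeness, the inversion loop described above can be summarised by the following minimal Levenberg-Marquardt sketch. It assumes that `residual(s)` returns the vector of differences between computed and measured sensor fields for segment coordinates s, that the matrix G is approximated by forward finite differences, and that the damping update and stopping rule are simple placeholders rather than the exact scheme used in the paper.

```python
import numpy as np

def levenberg_marquardt(residual, s0, lam=1e-2, max_iter=100, tol=1e-8, h=1e-6):
    """Minimise ||residual(s)||^2 with a basic damped Gauss-Newton (LM) iteration."""
    s = np.asarray(s0, dtype=float).copy()
    for _ in range(max_iter):
        f = residual(s)
        G = np.empty((f.size, s.size))
        for j in range(s.size):                       # forward-difference Jacobian
            sp = s.copy()
            sp[j] += h
            G[:, j] = (residual(sp) - f) / h
        JtJ = G.T @ G
        A = JtJ + lam * np.diag(np.diag(JtJ))         # damped approximate Hessian
        delta = np.linalg.solve(A, -G.T @ f)
        if np.linalg.norm(residual(s + delta)) < np.linalg.norm(f):
            s, lam = s + delta, lam * 0.5             # accept the step, relax damping
        else:
            lam *= 10.0                               # reject the step, increase damping
        if np.linalg.norm(delta) < tol:
            break
    return s
```

In practice an analytic or semi-analytic Jacobian would be cheaper than finite differences, since each column here costs one extra forward solve; the sketch favours simplicity over efficiency.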
+a semi - cylindrical surface sensors panel is then also considered ( figure 4 ) .the flat sensor panel is 3.5 cm height and 1.5 cm width , while the semi - cylindrical one has 1.5 cm radius and 3.5 cm height .[ fig : fig4 ] for the above described set up , the measured values are calculated by means of the mom based forward solver described in section 2 .the initial set of end - points segments coordinates for the lm optimization algorithm is that of a straight thin - wire antenna 2.1 cm length and perpendicular to the ground plane . from these coordinates the true position of each branch is then estimated .figure 3 shows the estimated position of the conductor in the case of the flat sensors panel .the picture shows that the distance between the estimated position , the black dashed line with plus markers , and the true position , the red dashed line with square markers , is close to zero .[ tab : flatpanel ] .relative error and number of iteration required with the flat sensors panel [ cols="^,^,^",options="header " , ] the picture shows as the distance between the estimated position , the black dashed line with plus markers , and the true position , the red dashed line with square markers , is close to zero . for this scenario ,the -norm relative error between true and estimated coordinates for a given level of source current , and the number of iterations are shown in table 2 .the simulations results show that the error is close to zero also in this case .note that a lower relative error has been reached for the flat sensors panel : this result can be justified because with the semi - cylindrical sensors panel the measured magnetic field components contain more information about the real field distribution .moreover the number of iterations decreases as the source current amplitude increases for a same sensors configuration .however , the shape of the sensors panel influences also the convergence of the optimization algorithm : the semi - cylindrical sensors panel outperforms the flat sensors panel , especially for low levels of antenna source currents .this result suggests that the number of iterations can be reduced via a proper sensors panel design , without compromising the precision and without increasing the antenna currents , this latter aspect is fundamental for the invasiveness of the method .in this paper , a numerical method to evaluate the shape of micro - structures for bio - medical application with a very low invasiveness for the human body , is proposed .a flexible thinwire antenna radiates the vhf waves and then , by numerically solving a typical inverse problem , the estimation of the antenna location enables to reconstruct the micro - structure s image. the typical inverse problem is solved , and first simulation results assess the validity and the robustness of the proposed approach .+ g. ala , g. di blasi and e. francomano , `` a numerical meshless particle method in solving the magnetoencefalography forward problem , '' international journal of numerical modeling : electronic networks , devices and fields , vol .25 , no . 5 - 6 , pp .428440 , february 2012 . g. ala , and e. francomano , `` a multi - sphere particle numerical model for non - invasive investigations of neuronal human brain activity .method in solving the magnetoencefalography forward problem , '' progress in electromagnetics research letters , vol .143 - 153 , 2013 .p. di barba , m.e .mognaschi , g. nolte , r. palka and a. 
savini , `` source identification based on regularization and evolutionary computing in biomagnetism , '' compel , vol .4 , pp . 10221032 , 2010 .a. fhager , p. hashemzadeh and m. persson , `` reconstruction quality and spectral content of an electromagnetic time - domain inversion algorithm , '' ieee transactions on biomedical engineering , vol .53 , no . 8 , pp .1594 1604 , august 2006 . r.d .foster , d.a .pistenmaa , t.d .solberg , `` a comparison of radiographic techniques and electromagnetic transponders for localization of the prostate , '' radiation oncology , vol .7 , no . 1 , 2012 .z. zakaria , r.a . rahim , m.s.b .mansor , s. yaacob , n. m. n. ayob , s.z.m .muji , m.h.f .rahiman and s.m.k.s .aman , `` advancements in transmitters and sensors for biological tissue imaging in magnetic induction tomography , '' sensors ( switzerland ) , vol . 12 , no . 6 , pp . 7126 7156 , 2012. t. williams , j. sill and e. fear , `` breast surface estimation for radar based breast imaging systems , '' ieee transactions on biomedical engineering , vol .55 , no . 6 , pp . 16781686 , june 2008 .davidson , u. jakobus and m.a .stuchly , `` human exposure assessment in the near field of gsm base - station antennas using a hybrid finite element / method of moments technique , '' ieee transactions on biomedical engineering , vol .2 , pp . 224233 , february 2003 .g. ala , p. buccheri , e. francomano , a. tortorici , `` advanced algorithm for transient analysis of grounding systems by moments method , '' in iee conference publication , iee , ed ., 1994 , pp . 363366 .g. ala , e. francomano and a. tortorici , `` iterative moment method for electromagnetic transients in grounding systems on cray t3d , '' lecture notes in computer science , vol .1041 , no .1 , pp . 916 , august 1996 . g. ala , e. francomano and a. tortorici , , `` the method of moments for electromagnetic transients in grounding systems on distributed memory multiprocessors , '' parallel algorithms and applications , vol .3 , pp . 213233 , 2000 .g. ala and m.l .di silvestre , `` a simulation model for electromagnetic transients in lightning protection systems , '' ieee transactions on electromagnetic compatibility , vol .539554 , july 2002 .g. ala , m.l .di silvestre , e. francomano and a. tortorici , `` an advanced numerical model in solving thin - wire integral equations by using semi - orthogonal compactly supported spline wavelets , '' ieee transactions on electromagnetic compatibility , vol .2 , pp . 218 228 , may 2003 . g. ala , m.l .di silvestre , e. francomano and a. tortorici , `` wavelet - based efficient simulation of electromagnetic transients in a lightning protection system , '' ieee transactions on magnetics , vol .3 , pp . 12571260 , 2003 . g. ala , m.c .di piazza , g. tine , f. viola and g. vitale , `` evaluation of radiated emi in 42 v vehicle electrical systems by fdtd simulation , '' ieee transactions on vehicular technology , vol .1477 1484 , july 2007 .w.h . press ,teukolsky , w.t .vetterling and b.p .flannery , numerical recipes : the art of scientific computing .cambridge : cambridge university press , 2007 .e. francomano , a. tortorici , c. lodato and s. lopes , `` an algorithm for optical flow computation based on a quasi - interpolant operator , '' computing letters , vol .2 , no . 1 - 2 , pp .93106 , may 2006 .
|
imaging techniques give fundamental support to medical diagnostics during pathology discovery as well as for the characterization of biological structures . the imaging methods involve electromagnetic waves in a frequency range that spans from a few hz to ghz and beyond . most of these methods involve scanning of wide human body areas even if only small areas need to be analyzed . in this paper , a numerical method to evaluate the shape of micro - structures for application in the medical field , with a very low invasiveness for the human body , is proposed . a flexible thin - wire antenna radiates the vhf waves and then , by measuring the spatial magnetic field distribution , it is possible to reconstruct the image of the micro - structure by estimating the location of the antenna against a sensors panel . the typical inverse problem described above is solved numerically , and first simulation results are presented in order to show the validity and the robustness of the proposed approach . method of moments , inverse problem , levenberg - marquardt method , biological microstructures , vhf waves
|
memristors have revolutionised the material basis of computation and neuromorphic architectures . since the announcement of the first documented two - terminal memristor researchers have been eager to experiment with memristors , but they are difficult to synthesize and not yet commercially available .few looked beyond standard electronic engineering approaches , but those who did uncovered a promising behaviour of living systems .johnsen et al found that conductance properties of sweat ducts in human skin are well approximated by a memristive model , and experimental evidence that flowing blood and leaves exhibits memristive properties was provided by kosta et al . in 2008pershin et al described an adaptive ` learning ' behaviour of slime mould _ physarum polycephalum _ in terms of a memristor model .memristor theory has been applied to neurons and synapses , suggesting that memristance might be useful to explain the process of learning . the plasmodium of _ physarum polycephalum _ ( order _ physarales _ , class _ myxomecetes _ , subclass _ myxogastromycetidae _ ) is a single cell , visible with an unaided eye .the plasmodium behaves and moves as a giant amoeba .it feeds on bacteria , other microbial creatures , spores and micro - particles .structurally _ physarum _ is composed of a semi - rigid gel put down by the living protoplasm , a type of cytosol , within it , and this gel is covered with a protective ` slime ' which gives rise to the plasmodium s colloquial name ` slime mould ' .this protoplasm contains many nuclei which can be described as interacting oscillators .furthermore , the protoplasm undergoes shuttle - transport , switching direction approximately every 50 seconds .thus , when measuring electrical properties of _ physarum _ , we must take care to separate the effects of the living protoplasm , and non - living exteriour gel and slime layer , as well as being aware that the cytosol is a moving and living system which will change over time . in an environment with distributed sources of nutrients the plasmodium forms a network of protoplasmic tubes connecting food sources .the network of protoplasmic tubes developed by _physarum _ shows some signs of optimality in terms of shortest path and proximity graphs . in have used _ physarum _ to make prototypes of massively - parallel amorphous computers _ physarum _ machines capable of solving problems of computational geometry , graph - theory and logic ._ physarum _ machines implement morphological computation : given a problem represented by a spatial configuration of attractants and repellents the _ physarum _ gives a solution by patterns of its protoplasmic network .this limits application domain of _ physarum _ processors to computational tasks with natural parallelism .if we were able to make electronic devices from living _ physarum _ we would be able to construct a full spectrum of computing devices with a conventional architecture .furthermore , to make a set of ` all possible computing devices ' it is enough to make a material implication gate and memristors naturally implement material implication .thus answering the question ` is _ physarum _ a memristor ? 
' is a task of upmost priority ._ physarum _ s growth can be directed with chemo - attractants and repellants and it chooses efficient paths between food sources .thus , _ physarum _ could be used to ` design ' efficient circuits .previous preliminary work has shown that _ physarum _ can take - up iron - based magnetic particles , so it could be used to lay down efficient circuits and , if the magnetic effects were detrimental to the physarum , we might expect it to lay down circuits with a good electromagnetic profile , thus , we also investigated the electrical properties of the _ physarum _ tubes with and without these particles .plasmodium of _ physarum polycephalum _ were cultivated on wet absorbent paper ( to keep the humidity level high ) in an aerated , dark environment and fed with oat flakes .the culture was periodically replanted to a fresh substrate .the experimental set - up is shown in figures [ scheme ] .two electrodes ( fig .[ scheme]c ) were stuck to a plastic petri dish mm apart and two islands of 2ml agar ( fig .[ scheme]b ) were placed on each electrode . to perform the experiments ,a _ physarum_-colonised oat - flake was inoculated on one island with a fresh oat flake on the other : _ physarum _ would then colonise the other island ( fig .[ scheme]a ) , linking both electrodes with a single protoplasmic tube ( fig .[ scheme]d ) .this experimental setup has proved to be efficient in uncovering patterns of electrical activity of _ physarum _ and _ physarum _ s response to chemical , optical and tactile stimulation .electrical measurements were performed with a keithley 617 programmable electrometer which allows the measurement of currents from pa-3.5ma .measurements were performed with a voltage range of 50mv ( sample 1 - 13 ) or 100mv ( samples 15 - 22 ) , a triangular voltage waveform and a measurement rate of 0.5s , 1s or 2s : this is the d.c .equivalent to an a.c .voltage frequency of 2mhz , 1mhz or 0.5mhz . as _physarum _ is a living system and can respond , we compared first and second runs across different dishes .the tests were divided into two batches : batch 1 was measured with timesteps of 0.5s , 1s or 2s and a voltage range of and ; batch 2 was measured only with the 2s timestep and a voltage range of - .three different electrode set - ups were tested : thick ( 2 mm ) aluminium wire , thin ( 0.5 mm ) silver wire and thin aluminium mesh ._ physarum _ was also tested for the electrical effect of the uptake of magnetic particles ( fluidmag - d,100 nm , 25mg / ml , chemicell ) by inoculating the source or target oat flakes with particles . as _physarum _ is sensitive to light , all tests were run in the dark . 
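For concreteness, the staircase triangular voltage sweep applied by the electrometer can be generated as below; the sketch assumes the sweep starts at zero volts and rises first, and the number of steps per period is left as a free parameter.

```python
import numpy as np

def triangular_sweep(v_max, steps_per_period, periods=1):
    """Discrete triangular voltage waveform 0 -> +v_max -> 0 -> -v_max -> 0,
    one sample per measurement step of the electrometer."""
    q = steps_per_period // 4
    up = np.linspace(0.0, v_max, q, endpoint=False)
    down = np.linspace(v_max, -v_max, 2 * q, endpoint=False)
    back = np.linspace(-v_max, 0.0, q, endpoint=False)
    return np.tile(np.concatenate([up, down, back]), periods)

# e.g. a +/-50 mV sweep sampled over 160 steps
v = triangular_sweep(0.05, 160)
```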
the starting resistance , , the hysteresis ( calculated as in ) and scaled hysteresis , , ( calculated as a ratio of as in ) were calculated for the _ physarum _ that exhibited hysteresis .to control for experimental set - up variation , , and values were compared to the length of the tube , and the electrode separation .the tube lengths were measured from photographs of the first batch of measurements .23 samples were tested ; in 1 sample _ physarum _ did not form a tube across the electrodes and in another the _ physarum _ grew , formed and abandoned the tube before it could be measured : this sample was used as a control for the gel part of the tube .the remaining 21 samples were measured and these comprised of two samples with magnetic nanoparticles on the inoculation electrode , two samples with magnetic nanoparticles on the target electrode the rest were normal _physarum_. three samples were electrically unconnected . of the 11 samples in batch 1 , 2 exhibited good memristance curves, the other 9 exhibited open curves as shown in figure [ fig : freq ] ( these open curves are very similar to those seen in low - voltage tio sol - gel memristor measurements which are indicative of memristive results at high voltage measurements ) . for batch 2 , which were measured at a larger voltage range ( between and ) , 8 out of 8 samples showed good memristive curves as shown in figure [ graphs ] .comparison between the memristance curves shown in figure [ graphs ] and the results for an abandoned tube shown in figure [ fig : abandoned ] shows that the memristance is due to the living _ physarum _ protoplasm .similar to inorganic memristors , this could be due to voltage - driven charge transport .shuttle transport reverses direction around every 50s and this could give rise to a measurable hysteresis , however because these data were measured at for 160 steps the period is 320s which is over 3 times the period of the shuttle transport and , as such , is not the cause of the measured effect .longer time - scale current responses have been observed in d.c .experiments which could be related to the memristive effect .finally , it could be due to a voltage - mediated change in material properties of the protoplasm which has a relaxation time , leading to a memristive hysteresis this seems to be the mostly likely explanation because repeated applications of voltage increased the resistance ( see figure [ fig : freq ] ) .no electrical effect of the different electrode types was seen beyond the mechanical difficulties : the thicker aluminium electrodes put strain on the tube when measured leading to breakages or short - circuits .no discernible electrical effect was seen from the presence of magnetic nanoparticles .the _ physarum _ picked them up and internalised them , but the measured resistance curves were qualitatively the same as those without the particles and within the same range .our other work shows that the nanoparticles are localised within the gel part of the _ physarum _ , so this lack of effect is not because the nanoparticles are not present .one possibility could be that the _ physarum _ might internalise or biofoul the nanoparticles .this effect whereby open loops at low voltages show memristance at high voltages suggests that , like tio sol - gel memristors , the low - voltage open - loop behaviour is related to memristance .these results show that relatively large voltages are needed for the measurement of memristance in _physarum_. 
the testing process may have resulted in behavioural modification of the _ physarum _ : successive applications of voltage caused the _ physarum _ to abandon the tube as observed by a thinning and lightening of the tube .however _ physarum _ was still alive and active after testing , and the tube abandonment could be due to it exploring the environment for other food sources ( the time spent connecting the two oat - flakes was commensurate with that observed without electrical testing , but due to the high variance in behaviour we can not say if the applied voltage harmed the _ physarum _ ) . as frequencycan effect the size of memristor hysteresis ( due to natural response speed of the system ) the voltage waveform frequency was altered .as increasing the voltage range can turn the ` open - loop ' type of memristor into a pinched hysteresis loop , the lack of pinched hysteresis on the open - loop memristor responses could be due to the chosen voltage range .the effect of frequency is tested in fig .[ fig : freq ] , where three repeats of the same range is tried at the standard and twice the frequency and one larger range is tried at the standard frequency .the shape is qualitatively similar over this voltage range and unaffected by frequency over this voltage range .figure [ fig : freq ] shows that repeated applications of voltage causes the the resistance of the tube to increase ( similar results were observed on the two repeats with other samples ) . as fig .[ fig : abandoned ] shows no repeated resistance change for an empty tube , this suggests that the protoplasm part of _ physarum _ is the material responsible for the observed memristance rather than a chemical or physical change in the structure of the outer parts of the tube .the range of tube lengths found was 6.25 mm to 43 mm , with a mean of 19.71 mm and standard deviation of 10.64 mm : this high deviation is because the _ physarum _ initially explores the space following a chemical gradient to connect the oat flakes .the tube length shortens over time as the _ physarum _ increases the efficiency of connection between food sources , for example , the length of one protoplasmic tube went from 5 mm to 4.26 mm over a day .no correlation was found between : and ; and ; and ; and ; and electrode separation ( graphs not shown ) , this demonstrates that the tube length ( ) or electrode separation can not be used to control the electrical properties and that the starting resistance ( ) is not a predictor for the hysteresis ( and ) : as a comparison , results for sol - gel memristors are given in .the total power used over an - loop and the average power used were also calculated for 11 samples , no correlation was found between the power and , or the power and electrode separation ( graphs not shown ) .thus we can conclude that the variation in electrical response is due to the variation between individual samples of _ physarum _ and not the variation in set up or tube length .the shape of the curve in figure [ graphs]d is interesting , as it shows asymmetry between in the resistance change rate , and has not been observed in our inorganic memristors .the memory - conservation theory of memristance explains memristive effects in terms of an interaction between state - carrying ions , , and conduction ( state - sampling ) electrons .to undergo locomotion , _ physarum _ exhibits shuttle transport where the protoplasm is moved backwards and forwards : ions in the protoplasm would be moved around by this motion and this gives could give 
rise to a background current .thus , this background current should be included and we investigated this in order to try and understand the shape of the distinctive _ physarum _ memristor ( see figure [ graphs]d ) . a similar approach has been used to model reram , where the electromotive force associated with a ` nanobattery ' is added to a memristor circuit . figure [ fig : circuit ] shows a circuit which could be used to model the situation : this circuit contains a 2-port ` black box ' which we measure .we assume that this 2-port contains a current source ( battery ) because _ physarum _ is alive and uses chemical energy to produce reactions and the motion of the membrane , and a memristor ( or a memristor - resistor in series ) .in fact , from long - term experiments we have seen a slow oscillation with a half - period of around 700 that could fit the description of such a current source , especially as it was observed at , so it is on the same order as our lower current - curve measurements . with the addition of an internal current source, we are now modelling the _ physarum _ as an active memristor ( standard memristors are passive components ) .the current , , is the background current from the living _physarum_. from the circuit in figure [ fig : circuit ] we can write the following expression for the measured current , : where is the current that is driven by the external voltage , .the background battery can either add to or oppose the external power source , and thus the background current is either in the same direction or opposite direction to the driven current , as we use and to represent this , where it is understood that the internal ions may not have the same charge as the electrons and we take to be the direction of increase in total current . is the voltage associated with the internal ` battery ' .if we have a current that adds to our driven current at one point in time and subtracts at another , we would expect to see a non - rotational symmetric memristor curve .from equation [ eq : itot ] we can write the memristive response , , as : where is the memristance due to the motion of ions under the applied voltage that expected from the memory - conservation theory of memristance and the second term is the internal resistance response due to the background current , which we label as .we do nt know what the form of is , but there are two options , we can model it as a sine wave with a period of roughly 700s , or we can model it as a bipolar piece - wise linear ( bpwl ) waveform , which corresponds more to what is observed down a microscope when watching _ physarum _ shuttle transport . as the - curves took a total of to run , we can model the current as being constant over this period , especially if it is the long - time oscillations observed in and thus additive to the memristance current in one direction and subtractive in the other .we can descretise equation [ eq : r ] , to get an expression for the descretised rate of change of memristance , as : for the positive lobe of the plot and for the negative lobe of the plot , see figure [ fig : r - t ] . assuming that the rate of change of the does not change over the memristors range although it does change direction , ( which is an approximation ), we can substitute for and write where is the factorthat is bigger than by and it is equal to 2.88 , i.e. 
the rate of change of resistance is around 3 times faster on the negative lobe compared to the positive , leading to a non - rotationally - symmetric pinched hysteresis loop .we can calculate the actual rates and from the measured current , which we do by calculating the ` instantaneous ' memristance at each measurement point : and this is shown in figure [ fig : r - t ] . around zero and small values of get large discrepancies , due to the method we re using to calculate the memristance , but over most of the curve we can see that the straight - line approximation of the change in memristance holds pretty well .figure [ fig : r - t]b shows that there are two gradients , a shallower one for the positive loop and a steeper one for the negative loop .if these gradients were equal we would have a standard ( ideal ) memristor curve .the memristor curve is commonly broken up into 4 segments : 1 : ; 2 : ; 3: ; 4: .we chose to fit a straight line to the 1 and 3 segments as they start from the same place ( 0v ) , these lines are shown on the curve and their equations are gradients of 3.1009 and 8.9348 , -intercepts of 4.9696 and -9.9034 ( the negative intercept is obviously unphysical and is a result of approximating and changing by a tangent ) with a norm of residuals of 5.7685 and 5.8028 segments 1 and 3 respectively .we can get a measure of the memristance of the cell s ` internal battery ' from rearranging equation [ eq : gr+]: this gives us a negative slope of -2.91485 with a negative resistance intercept whose modulus is 94% ( where we are taking as the -intercept from the fitted tangent for the first segment ) .negative resistance implies the presence of active components in our test circuit , verifying our approach of treating the cells as possessing an ` internal battery ' .this shows that , at these voltages ( which are close to physiological voltages ) , the cell s internal ` battery ' gives physiological currents close to our driven current .thus , to model living cells over physiological ranges , it seems that active memristors are a better approximation than passive memristors .the results clearly show hysteresis and memristive effects in _ physarum polycephalum_.the frequency and voltage range choice effected the results , we found that a timestep of and a of over 200mv gave the best results . at low voltages ,an open - curve shape was measured instead , which we suspect is the memristance effect when measured at below a threashold voltage . 
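The asymmetry analysis above, computing the 'instantaneous' memristance R_k = V_k / I_k along the sweep and fitting straight lines over the 0 to +V_max and 0 to -V_max segments, can be sketched as follows. It assumes the arrays cover exactly one triangular period that starts at zero volts and rises first, and points where the current is too small for V/I to be meaningful are skipped; the ratio of the two fitted slopes then gives the asymmetry factor discussed in the text.

```python
import numpy as np

def lobe_rates(v, i, dt, i_min=1e-12):
    """Instantaneous memristance R_k = V_k / I_k along a triangular sweep and
    straight-line fits of R against time over segment 1 (0 -> +V_max) and
    segment 3 (0 -> -V_max)."""
    with np.errstate(divide="ignore", invalid="ignore"):
        r = np.where(np.abs(i) > i_min, v / i, np.nan)
    t = np.arange(len(v)) * dt
    q = len(v) // 4
    segments = {"segment 1 (positive lobe)": slice(0, q),
                "segment 3 (negative lobe)": slice(2 * q, 3 * q)}
    fits = {}
    for name, seg in segments.items():
        ok = np.isfinite(r[seg])
        slope, intercept = np.polyfit(t[seg][ok], r[seg][ok], 1)
        fits[name] = (slope, intercept)
    return r, fits
```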
as the memristive effect disappeared when the _ physarum _ moved elsewhere , and an abandoned tube showed a high linear resistance , we conclude that the memristive response is due to the living protoplasm .active memristor models show a promising explanation for the asymmetric shape seen when the memristor current response is below , for higher current responses the internal current is small enough that ignoring it and modelling the _ physarum _ as an ideal memristor is a valid approximation .it is intriguing that _ physarum _ exhibits memristive ability , given that it is a simple biological system that is nonetheless capable of habituation and learning and that neurological components ( synapses , ion pumps ) also exhibit memristance and learning abilities .this could suggest that evolution may have made use of memristance in learning systems .the presence of biological active memristors suggests that biological chaotic circuits could be possible ( active memristors are a common component of chaotic circuits ) , and even that they may have been utilised by evolution .current versus voltage profiles measured for protoplasmic tubes of _p. polycephalum _ exhibit great variability in the magnitude of hysteresis and the location of pitch points .this is to be expected given the fact that slime mould is an ever - changing living entity and although attempts were made to standardise the experimental set up such as the measurement of single protoplasmic tubes across a known electrode gap it proved difficult to precisely control the morphology of the tubes .for example even though the electrode distance can be controlled this does not ensure standardisation of the protoplasmic tubes length or the width .it also proves difficult to control the position and numbers of small sub - branches which may arise during experimental measurements . although these do not usually contact the electrode except for sub branching at the terminal ends , this alteration in morphology is likely to affect the conductivity .thus future research would focus on stabilisation of protoplasmic tubes .stabilisation could be achieved by employing the _ physarum _ s potential for internalisation and re - distribution of conductive and magnetic particles or at least constraining the growing with some kind of scaffolding .there is also the potential that the slime mould could construct an internal scaffold or that this could be induced by appropriately applied external fields , in fact morphological control could be accomplished by application of appropriate fields per se . despite the complications of living electronics , we have demonstrated that it is feasible to implement living memristive devices from slime moulds _p. polycephalum_. we believe that future electronic designs will incorporate growing slime mould networks capable of forming a skeleton of conductive information processing elements as part of integrated computing circuits .the slime mould circuits will allow for a high density of computing elements and very low power consumption . to date the useful lifetime of a slime mouldmemristor is 3 - 5 days .however , future studies on loading and coating of the tubes with functional materials with a dual role of structural re - enforcement such as nano - metallic , nano - magnetic or nano - structured semiconducting particles , conducting polymers etc . 
should enable us to increase the life span substantially whilst also imparting a diverse range of tunable electronic characteristics .if electronic circuits can be ` grown ' or laid - down from _ physarum _ , it would be very useful for reducing the water requirements and poisonous waste of the electronics industry .gale e. , de lacy costello b. , admatzky a. is spiking logic the route to memristor - based computers ? proceedings of 2013 ieee international conference on electronics , circuits and systems ( icecs ) , ( 2013 ) , 297300 hulsmann n. and wohlfarth - bottermann k.e .spatio - temporal relationships between protoplasmic streaming and contraction activities in plasmodial veins of _ physarum _ polycephalum . cytoviologie .1978 ( 17 ) 317 - 334 .kosta s.p ., kosta y.p . , gaur a. , dube y.m ., chuadhari j.p . , patoliya j. , kosta s. , panchal p. , vaghela p. , patel k. , patel b. , bhatt r. , patel v. new vistas of electronics towards biological ( biomass ) sensors international journal of academic research 3 ( 2011 ) 511526 .valov i.,linn e. , tappertzhofen s.,schmelzer s. van den hurk j. , lentz f. , waser r. nanobatteries in redox - based resistive switches require extension of memristor theory , nature communications , 2013 , 4 , 1771 ( 9pp ) whiting j.g.h . , de lacy costello b. , adamatzky a. towards slime mould chemical sensor : mapping chemical inputs onto electrical potential dynamics of physarum polycephalum.ensors and actuators b : chemical 191 ( 2014 ) 844853 .zamarreno - ramos c. , carmuas l.a.,prez - carrasco j.a.,masquelier t. , serrano - gotarredona t. , linares - barranco b. on spike - timing dependent plasticity , memristive devices and building a self - learning visual cortex frontiers in neuormorphic engineering , 5 , ( 2011 ) , 26 ( 20pp )
|
In laboratory experiments we demonstrate that protoplasmic tubes of the acellular slime mould _Physarum polycephalum_ show current-versus-voltage profiles consistent with memristive systems, and that the effect is due to the living protoplasm of the mould. This complements previous findings on the memristive properties of other living systems (human skin and blood) and contributes to the development of self-growing bio-electronic circuits. The distinctive asymmetric current-voltage curves that were occasionally observed, when the internal current is of the same order as the driven current, are well modelled by the concept of active memristors. _Keywords: memristor, slime mould, bioelectronics, active memristor, Physarum_
|
in recent times , problems in the modelling of both nonmagnetic and magnetic stellar atmospheres have emerged that can not be solved by traditional numerically intensive computing .take as an example the calculation of line - blanketed lte stellar atmospheres by means of opacity sampling as done in _atlas12_. even on present - day fast single - processor machines realistic frequency step sizes lead to rather prohibitive cpu times ( see castelli , these proceedings ) .the same applies to the modelling of broadband linear ( bblp ) and circular ( bbcp ) polarisation in sunspots , in the solar network and in magnetic stars , and to full zeeman doppler imaging ( zdi ) .the usual remedies to this software crisis do not excel in imagination : employing coarse frequency grids , approximate formal solvers or milne - eddington atmospheres is a cheap expedient but does not attack the problem at its root . instead of waiting for the next increase in cpu clock rates to be able to run today s models on tomorrow s computers , it would be preferable to go parallel .indeed , parallel architectures are readily available nowadays and languages with parallel constructs provide concurrent execution of program segments .obviously there is no way around the restructuring and at least partial rewriting of existing programs , but is nt this preferable to physically doubtful approximations or to poor frequency sampling ?high performance fortran ( hpf ) would seem the obvious choice for the denizens of the fortran universe , but does the data parallel paradigm of hpf really provide optimum parallelism in spectral line synthesis , be it lte or nlte , polarised or unpolarised ? would hpf really speed up stellar atmosphere calculations with a suitably modified version of _ atlas _ , and what about zdi ? is nt it most likely that we have to rethink our approach in a far more radical way ?supercomputing should not be a synonym for brute number crunching nor should it reduce to the use of a few special instructions and highly specialised subprograms that can take advantage of parallel architectures .supercomputing deserves its name only when it encompasses object oriented software design on the _ appropriate level of abstraction _, when it ensures code safety and reliability , when it provides potentially massively parallel execution of _ large sections _ of the code , and when the most accurate and stable numerical methods are used .[ 0.4 ] in polarised radiative transfer the latter requirement translates into the use of the zeeman feautrier ( zf ) method ( auer et al .tedious to code and prone to bugs in fortran77 , a zf solver constitutes no problem for the ada programmer thanks to the high level abstractions made possible by the use of the ada programming language ; the block tri - diagonal scheme can be written down straightforwardly as given in rees & murphy ( 1987 ) .extensive tests have revealed that especially in the presence of blends the zf solver is ( at constant number of depth points ) up to 5 times more accurate than the delo method ( rees & murphy , 1987 ) as demonstrated in fig . 1 .if a 4000 interval is to be covered , the modelling of bblp and bbcp in a solar - type atmosphere involves opacity sampling over about zeeman subcomponents and formal solutions to achieve the minimum frequency resolution . since it is well known that in the presence of heavy blending milne - eddington based approaches are hopelessly inadequate we have no alternative to the admittedly expensive zf and delo solvers . 
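The block tri-diagonal structure mentioned above is what makes the ZF scheme tractable: each depth point couples only to its neighbours through small blocks. Below is a generic block forward-elimination/back-substitution (block Thomas) sketch in Python. It is not the actual Ada implementation; the difference equations that define the blocks for the Zeeman Feautrier problem are those of Auer et al. and Rees & Murphy (1987) and are not reproduced here.

```python
import numpy as np

def block_thomas(A, B, C, r):
    """
    Solve the block tri-diagonal system
        A[i] x[i-1] + B[i] x[i] + C[i] x[i+1] = r[i],   i = 0 .. n-1
    (A[0] and C[n-1] are ignored) by block forward elimination and back
    substitution.  Shapes: A, B, C are (n, m, m); r is (n, m).
    For a ZF-type solver, m would be the number of Feautrier/Stokes
    components per depth point and n the number of depth points.
    """
    n, m, _ = B.shape
    G = np.empty((n, m, m))
    h = np.empty((n, m))
    # forward elimination
    G[0] = np.linalg.solve(B[0], C[0])
    h[0] = np.linalg.solve(B[0], r[0])
    for i in range(1, n):
        M = B[i] - A[i] @ G[i - 1]
        G[i] = np.linalg.solve(M, C[i])
        h[i] = np.linalg.solve(M, r[i] - A[i] @ h[i - 1])
    # back substitution
    x = np.empty((n, m))
    x[-1] = h[-1]
    for i in range(n - 2, -1, -1):
        x[i] = h[i] - G[i] @ x[i + 1]
    return x
```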
the only way to be able to afford those relatively slow solvers appears to lie in parallelism on a large scale .you do nt have to visit homepages of astronomical colleagues to know that most of them program in fortran .but is fortran ( or hpf ) of any help in the computational astrophysics problems listed above ? codes of some 1000 fortran statements have been written ab initio in the late 1980s using simplified formal solvers and coarse fixed spatial integration grids to synthesise intensity spectra over intervals a mere 2 wide ; compare this to the hundreds and thousands of ngstrms required for the modelling of broadband polarisation .can such a program be upgraded for the latter purpose ?obviously a minor change wo nt do it but even if one were prepared to restructure large parts of the program , i claim that for very fundamental reasons this is not possible in a purely fortran context. there are no threads of control in _ data parallel _ hpf , so there is no way to directly implement the natural approach , viz . the computation in parallel of the emerging spectrum at each frequency point . forthis one would have to employ posix threads ( pthreads ) , taking care of the individual threads , mutexes , and locks , but this is truly hard and unrewarding work .none of the hpf features promise substantial gains in performance .ada95 and its concurrent constructs , the task types and the protected types are ideally suited for parallelising line synthesis and stellar atmosphere codes .task objects are program entities that can execute concurrently on different nodes ( also in distributed systems ) , protected objects can be used to provide light - weight synchronisation .in the past few years i have developed a new generation of codes in the fields of polarised and unpolarised line synthesis and of zeeman doppler imaging .these codes are all written in ada95 and conform to my definition of supercomputing given above .they incorporate up - to - date astrophysics , deal with realistic atmospheres , provide full treatment of blends involving anomalous zeeman patterns , and offer a choice of accurate and numerically stable formal solvers ( delo , zf ) . on the software side ,maximum reuse of software modules is achieved by information hiding and encapsulation , and by extensive use of generics , of child libraries and of inheritance . and finally ,all codes provide for potentially massive parallelism ; they run with virtually no change on anything from pcs to silicon graphics supercomputers , taking full advantage of resources ranging from 1 processor to 32 processors and more .[ 0.65 ] it is absolutely amazing how congenial the _ control - parallel _ paradigm of ada tasking is to line synthesis and stellar atmosphere problems .no large - scale modifications are needed to convert the sequential version of a program to a parallel version : changing not even 20 expressions in a 3500 loc ( lines of code ) ada program is sufficient to obtain a simple parallel version of a sequential code . at the same time , almost perfect load balance ( distribution of the computations to the various cpus according to their availability ) is achieved in an easy and elegant way through the use of protected objects for synchronisation . 
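As a loose illustration of the decomposition that the tasking approach exploits -- one formal solution per frequency point, farmed out to whatever processors are available -- here is a Python multiprocessing sketch. This is only an analogy: it does not reproduce Ada task types or protected objects, and the grid size, function name and return values are placeholders.

```python
from multiprocessing import Pool

def emergent_stokes(wavelength):
    """Placeholder for one formal solution (ZF or DELO) at this wavelength."""
    # A real solver would integrate the polarised transfer equation through
    # the model atmosphere and return the emergent Stokes vector.
    return (wavelength, 1.0, 0.0, 0.0, 0.0)

if __name__ == "__main__":
    grid = [4000.0 + 0.01 * k for k in range(500_000)]   # illustrative sampling grid
    with Pool() as pool:                                  # one worker per available core
        spectrum = pool.map(emergent_stokes, grid, chunksize=1000)
```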
in most casesthere is nothing more to do than put the subprogram that is to be executed in parallel into a task , replace those variables that are to be read or updated in mutual exclusion by protected objects , and finally statically create or dynamically allocate as many task objects as processors are available . in the shortest of times ( 2 hours and less ) you are gratified with a parallel program ! numerous examples of how the object oriented and parallel features of ada95 can be employed in scientific computing can be found in stift ( 1998 ) .thanks to a dedicated silicon graphics origin200 server with four r10000 processors , magnetic broadband polarisation in heavily blended spectra is at last revealing some of its secrets. calculations of extensive grids of broadband polarisation as a function of magnetic field strength and direction , of the atmospheric model , and of the wavelength interval have shown that the polarisation signal does not saturate at large field values as in the case of the classical zeeman triplet but may display complex behaviour as can be seen in fig .it thus appears that strong fields do not necessarily lead to a strong polarisation signal , but that rather the opposite can be true .leroy ( 1989 ) has demonstrated that the wavelength dependence of the degree of linear polarisation depends on the magnetic field strength ; his analysis did not include the effects of blending . extending the calculations presented by stift ( 1997 ) by synthesising spectra over the whole visible range for various atmospheres and a grid of magnetic field strengths and field directions ( fig .3 displays the solar case ) allows a systematic investigation of the vs. relation . as franco leone has suggested , this relation could possibly be used as a diagnostic tool for estimating the mean magnetic field modulus of a magnetic ap star .first results indicate that this is indeed the case , the relation appearing to be insensitive to the stellar magnetic geometry and independent of magnetic phase , reflecting only the mean magnetic field modulus .the outlook is fascinating : supercomputing with ada95 provides the means for major advances in the field of magnetic polarisation and stellar atmospheres , combining object orientation with straightforward scalable parallelism . with the technology and thousands of lines of ada code available forfree , computational astrophysics can easily overcome the present software crisis .this work has been made possible by the austrian _ fonds zur frderung der wissenschaftlichen forschung _ under project p12101-ast .additional support came from the _ hochschuljubilumsstiftung der stadt wien_.
|
Certain problems in the field of stellar atmospheres, polarised radiative transfer and magnetic field diagnostics cannot be addressed by traditional sequential programming techniques, because CPU times become prohibitive on even the fastest single-processor machines when realistic physics and accurate numerical methods are employed. This contribution discusses which kind of supercomputing approach is best suited to the modelling of Ap stars, pointing out the superiority of parallel computing with Ada95 over High Performance Fortran in all of the above-mentioned fields.
|
temporal data mining is concerned with finitely many useful patterns in sequential ( symbolic ) data streams .frequent episode discovery , first introduced in , is a popular framework for mining patterns from sequential data .the framework has been successfully used in many application domains , e.g. , analysis of alarm sequences in telecommunication networks , root cause diagnostics from faults log data in manufacturing , user - behavior prediction from web interaction logs , inferring functional connectivity from multi - neuronal spike train data , relating financial events and stock trends , protein sequence classification , intrusion detection , text mining , seismic data analysis etc .the data in this framework is a single long stream of events , where each event is described by a symbolic event - type from a finite alphabet and the time of occurrence of the event .the patterns of interest are termed episodes . informally ,an episode is a short ordered sequence of event types , and a _ frequent _ episode is one that occurs often enough in the given data sequence .discovering frequent episodes is a good way to unearth temporal correlations in the data .given a user - defined frequency threshold , the task is to efficiently obtain all frequent episodes in the data sequence . an important design choice in frequent episode discovery is the definition of frequency of episodes .intuitively any frequency should capture the notion of the episode occurring many times in the data and , at the same time , should have an efficient algorithm for computing the same .there are many ways to define frequency and this has given rise to different algorithms for frequent episode discovery .in the original framework of , frequency was defined as the number of fixed - width sliding windows over the data that contain at least one occurrence of the episode .another notion for frequency is based on the number of _ minimal _ occurrences .two frequency definitions called _ head frequency _ and _ total frequency _ are proposed in in order to overcome some limitations of the windows - based frequency of . in ,two more frequency definitions for episodes were proposed , based on certain specialized sets of occurrences of episodes in the data .many of the algorithms , such as the winepi of and the occurrences - based frequency counting algorithms of , employ finite state automata as the basic building blocks for recognizing occurrences of episodes in the data sequence .an automata - based counting scheme for minimal occurrences has also been proposed in . the multiplicity of frequency definitions and the associated algorithms for frequent episode discovery makes it difficult to compare the different methods . 
in this paper , we present a unified view of algorithms for frequent episode discovery under all the various frequency definitions .we present a generic automata - based algorithm for obtaining frequencies of a set of episodes and show that all the currently available algorithms can be obtained as special cases of this method .this viewpoint helps in obtaining useful insights regarding the kinds of occurrences tracked by the different algorithms .the framework also aids in deriving proofs of correctness for the various counting algorithms , many of which are not currently available in literature .our framework also helps in understanding the anti - monotonicity conditions satisfied by different frequencies which is needed for the candidate generation step .our general view can also help in generalizing current algorithms , which can discover only serial or parallel episodes , to the case of episodes with general partial orders and we briefly comment on this in our conclusions .the paper is organized as follows .[ sec : overview ] gives an overview of the episode framework and explains all the currently used frequencies in literature .[ sec : algorithms ] presents our generic algorithm and shows that all current counting techniques for these various frequencies can be derived as special cases .[ sec : proof - of - correctness ] gives proofs of correctness for the various counting algorithms utilizing this unified framework .[ sec : candgen ] discusses the candidate generation step for all these frequencies . in sec .[ sec : discussion ] we provide some discussion and concluding remarks .in this section we briefly review the framework of frequent episode discovery .the data , referred to as an _ event sequence _, is denoted by , where each pair represents an _ event _ , and the number of events in the event sequence is .each is a symbol ( or _ event - type _ ) from a finite alphabet , , and is a positive integer representing the time of occurrence of the event .the sequence is ordered so that , for all .the following is an example event sequence with 10 events : an -node episode , , is defined as a triple , , where , is a collection of nodes , is a partial order on and is a map that associates each node in with an event type from .thus an episode is a ( typically small ) collection of event - types along with an associated partial order .when the order is total , is called a serial episode , and when the order is empty , is called a parallel episode . in this paper , we restrict our attention to serial episodes . without loss of generality, we can now assume that the total order on the nodes of is given by .for example , consider a 3-node episode , , , , with .we denote such an episode by .an occurrence of episode in an event sequence is a map such that for all , and for all with we have . in the example event sequence , the events , and constitute an occurrence of while , and do not .we use ] .an episode is said to be a _subepisode _ of ( denoted ) if all the event - types in also appear in , and if their order in is same as that in .for example , is a 2-node subepisode of the episode while is not .the _ frequency _ of an episode is some measure of how often it occurs in the event sequence .a frequent episode is one whose frequency exceeds a user - defined threshold .the task in frequent episode discovery is to find all frequent episodes .given an occurrence of an -node episode , is called the _ span _ of the occurrence . 
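In code, the objects just defined are lightweight. The sketch below is our own illustration (with a made-up event sequence, not the paper's example from ([eq:example-sequence])): an event sequence is a time-ordered list of (event-type, time) pairs, a serial episode is the ordered tuple of its event types, and the earliest occurrence is found by a greedy left-to-right scan.

```python
# Event sequence: time-ordered list of (event_type, time) pairs.
sequence = [("A", 1), ("B", 2), ("A", 3), ("D", 4), ("C", 5),
            ("A", 6), ("B", 7), ("C", 8), ("B", 9), ("C", 10)]

# A serial episode is fully described by the ordered tuple of its event types.
episode = ("A", "B", "C")

def earliest_occurrence(episode, sequence, start=0):
    """Greedy left-to-right scan: return the indices of the earliest
    occurrence beginning at or after `start`, or None if there is none."""
    occ, j = [], 0
    for i in range(start, len(sequence)):
        if sequence[i][0] == episode[j]:
            occ.append(i)
            j += 1
            if j == len(episode):
                return occ
    return None

print(earliest_occurrence(episode, sequence))   # -> [0, 1, 4]
```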
in many applications , one may want to consider only those occurrences whose span is below some user - chosen limit .( this is because , occurrences constituted by events that are widely separated in time may not represent any underlying causative influences ) .we call any such constraint on span as an _ expiry - time constraint_. the constraint is specified by a threshold , , such that occurrences of episodes whose span is greater than are not considered while counting the frequency. one popular approach to frequent episode discovery is to use an apriori - style level - wise procedure . at level of the procedure , a ` candidate generation 'step combines frequent episodes of size to build candidates ( or potential frequent episodes ) of size using some kind of anti - monotonicity property ( e.g. frequency of an episode can not exceed frequency of any of its subepisodes ) .the second step at level is called ` frequency counting ' in which , the algorithm counts or computes the frequencies of the candidates and determines which of them are frequent .there are many ways to define the frequency of an episode .intuitively , any definition must capture some notion of how often the episode occurs in the data .it must also admit an efficient algorithm to obtain the frequencies for a set of episodes .further , to be able to apply a level - wise procedure , we need the frequency definition to satisfy some anti - monotonicity criterion . additionally , we would also like the frequency definition to be conducive to statistical significance analysis . in this section ,we discuss various frequency definitions that have been proposed in literature .( recall that the data is an event sequence , ) .[ def : windowsbased ] a window on an event sequence , , is a time interval ] is given by .given a user - defined window width , the * windows - based frequency * of is the number of windows of width which contain at least one occurrence of .for example , in the event sequence ( [ eq : example - sequence ] ) , there are windows with window width which contain an occurrence of . the time - window of an occurrence , , of is given by ] , ] . given a window - width , the * head frequency * of is the number of windows of width which contain an occurrence of starting at the left - end of the window and is denoted as .[ def : head ] given a window width , the * total frequency * of , denoted as , is defined as follows . [ def : total ] for a window - width of , the head frequency of in ( [ eq : example - sequence ] ) is .the total frequency of , , in ( [ eq : example - sequence ] ) is because the head frequency of in ( [ eq : example - sequence ] ) is . two occurrences and of are said to be _ non - overlapped _ if either or .a set of occurrences is said to be non - overlapped if every pair of occurrences in the set is non - overlapped .a set , of non - overlapped occurrences of in is _ maximal _ if , where is any other set of non - overlapped occurrences of in .the * non - overlapped frequency * of in ( denoted as ) is defined as the cardinality of a maximal non - overlapped set of occurrences of in .[ def : nonoverlapped ] two occurrences are non - overlapped if no event of one occurrence appears in between events of the other .the notion of a maximal non - overlapped set is needed since there can be many sets of non - overlapped occurrences of an episode with different cardinality .the non - overlapped frequency of in ( [ eq : example - sequence ] ) is .a maximal set of non - overlapped occurrences is and . 
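For small data sets, the windows-based and non-overlapped frequencies of a serial episode can be computed directly from these definitions. The brute-force sketch below is meant only to make the definitions concrete, not as an efficient counting scheme (the automata-based algorithms discussed later are the practical route); it assumes integer time stamps, and edge-window conventions for the first and last sliding windows vary between papers.

```python
def contains_occurrence(episode, events):
    """True if the (time-ordered) events contain an occurrence of the episode."""
    j = 0
    for etype, _ in events:
        if etype == episode[j]:
            j += 1
            if j == len(episode):
                return True
    return False

def windows_frequency(episode, sequence, width):
    """Number of sliding windows [t, t + width] containing an occurrence.
    Every integer start time whose window can overlap the data is considered."""
    times = [t for _, t in sequence]
    count = 0
    for t in range(times[0] - width, times[-1] + 1):
        window = [(e, s) for (e, s) in sequence if t <= s <= t + width]
        if contains_occurrence(episode, window):
            count += 1
    return count

def non_overlapped_frequency(episode, sequence):
    """Greedy maximal count of non-overlapped occurrences: once an occurrence
    is completed, the search restarts strictly after its last event."""
    count, j = 0, 0
    for etype, _ in sequence:
        if etype == episode[j]:
            j += 1
            if j == len(episode):
                count += 1
                j = 0
    return count

# non_overlapped_frequency(("A", "B", "C"), sequence) -> 2
# for the episode and sequence of the previous snippet.
```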
two occurrences and of are said to be _ non - interleaved _ if either or .a set of occurrences of in is _ non - interleaved _ if every pair of occurrences in the set is non - interleaved .a set of non - interleaved occurrences of in is * maximal * if , where is any other set of non - interleaved occurrences of in .the * non - interleaved frequency * of in ( denoted as ) is defined as the cardinality of a maximal non - interleaved set of occurrences of in .[ def : noninterleaved ] the occurrences and are non - interleaved ( though overlapped ) occurrences of in .together with , these two occurrences form a set of maximal non - interleaved occurrences of in ( [ eq : example - sequence ] ) and thus . two occurrences and of are said to be _ distinct _ if they do not share any two events .a set of occurrences is distinct if every pair of occurrences in it is distinct .a set of distinct occurrences of in is * maximal * if , where is any other set of distinct occurrences of in .the * distinct occurrences - based frequency * of in ( denoted as ) is the cardinality of a maximal distinct set of occurrences of in .[ def : distinct ] the three occurrences that constituted the maximal non - interleaved occurrences of in ( [ eq : example - sequence ] ) also form a set of maximal distinct occurrences in ( [ eq : example - sequence ] ) .+ the first frequency proposed in the literature was the windows based count and was originally applied for analyzing alarms in a telecommunication network .it uses an automata based algorithm called winepi for counting .candidate generation exploits the anti - monotonicity property that all subepisodes are at least as frequent as the parent episode .a statistical significance test for frequent episodes based on the windows - based count was proposed in .there is also an algorithm for discovering frequent episodes with a maximum - gap constraint under the windows - based count .the minimal windows based frequency and a level - wise procedure called minepi to track minimal windows were also proposed in .this algorithm has high space complexity since the exact locations of all the minimal windows of the various episodes are kept in memory .nevertheless , it is useful in rule generation .an efficient automata - based scheme for counting the number of minimal windows ( along with a proof of correctness ) was proposed in .the problem of statistical significance of minimal windows was recently addressed in .an algorithm for extracting rules under a maximal gap constraint and based on minimal occurrences has been proposed in . in the windows - based frequency ,the window width is essentially an expiry - time constraint ( an upper - bound on the span of the episodes ) .however , if the span of an occurrence is much smaller than the window width , then its frequency is artificially inflated because the same occurrence will be found in several successive sliding windows .the head frequency measure , proposed in , is a variant of the windows - based count intended to overcome this problem .based on the notion of head frequency , presents two algorithms minepi+ and emma .they also point out how head frequency can be a better choice for rule generation compared to the windows - based or the minimal windows - based counts . under the head frequency count , however , there can be episodes whose frequency is higher than some of their subepisodes ( see for details ) . 
to circumvent this , propose the idea of total frequency .currently , there is no statistical significance analysis based on head frequency or total frequency .an efficient automata - based counting algorithm under the non - overlapped frequency measure ( along with a proof of correctness ) can be found in . a statistical significance test for the same is proposed in .however , the algorithm in does not handle any expiry - time constraints .an efficient automata - based algorithm for counting non - overlapped occurrences under expiry - time constraint was proposed in though this has higher time and space complexity than the algorithm in .no proofs of correctness or statistical significance analysis are available for non - overlapped occurrences under an expiry - time constraint .algorithms for frequent episode discovery under the non - interleaved frequency can be found in .no proofs of correctness are available for these algorithms .another frequency measure we discuss in this paper is based on the idea of distinct occurrences .no algorithms are available for counting frequencies under this measure .the unified view of automata - based counting that we will present in this paper can be readily used to design algorithms for counting distinct occurrences of episodes .in this section , we present a generic algorithm for obtaining frequencies of episodes under the different frequency definitions listed in sec . [sec : frequencies - of - episodes ] .the basic ingredient in all the algorithms is a simple finite state automaton ( fsa ) that is used to recognize ( or track ) an episode s occurrences in the event sequence . the fsa for recognizing occurrences of is illustrated in fig . [ fig : automaton ] . in general, an fsa for an -node serial episode {\ensuremath{\rightarrow}}\alpha[2]{\ensuremath{\rightarrow}}\dots{\ensuremath{\rightarrow}}\alpha[n] ] , .the state is where is a null symbol .intuitively , if the fsa is in state ) ] ; if we now encounter an event of type ] .the state is the accepting state because when an automaton reaches this state , a full occurrence of the episode is tracked .\(0 ) ; ( 1 ) [ right = of 0 ] ; ( 2 ) [ right = of 1 ] ; ( 3 ) [ right = of 2 ] ; ( 0 ) edge node ( 1 ) edge [ loop above ] node \ ( ) ( 1 ) edge node ( 2 ) edge [ loop above ] node \ ( ) ( 2 ) edge node ( 3 ) edge [ loop above ] node \ ( ) ( 3 ) edge [ loop above ] node ( ) ; we first explain how these fsa can be used for obtaining all the different types of frequencies of episodes before presenting the generic algorithm .while discussing various algorithms , we represent any occurrence by ] and ( ii ) ] after .[ def : earliest transiting ] _it is easy to see that all occurrences tracked by algorithm no are earliest transiting .let denote the set of all earliest transiting occurrences of a given episode .we denote the occurrence ( as per the lexicographic ordering of occurrences ) in as .there are 6 earliest transiting occurrences of in .they are ] , ] , ] .the earliest transiting occurrences tracked by the no algorithm are and . while the algorithm no is very simple and efficient , it can not handle any expiry - time constraint .recall that the expiry - time constraint specifies an upper - bound , , on the span of any occurrence that is counted .suppose we want to count with .both the occurrences tracked by no have spans greater than and hence the resulting frequency count would be zero . 
however , is an occurrence which satisfies the expiry time constraint .algorithm no can not track because it uses only one automaton per episode and the automaton has to make a state transition as soon as the relevant event - type appears in the data . to overcome this limitation ,the algorithm can be modified so that a new automaton is initialized in the start state , whenever an existing automaton moves out of its start state .all automata make state transitions as soon as they are possible .each such automaton would track an earliest transiting occurrence . in this process, two automata may reach the same state . in our example , after seeing , the second and third automata to be initialized for , would be waiting in the same state ( ready to accept the next in the data ) .clearly , both automata will make state transitions on the same events from now on and so we need to keep only one of them .we retain the newer or most recently initialized automaton ( in this case , the third automaton ) since the span of the occurrence tracked by it would be smaller . when an automaton reaches its final state , if the span of the occurrence tracked by it is less than , then the corresponding frequency is incremented and all automata of the episode except the one waiting in the start state are retired .( this ensures we are tracking only non - overlapped occurrences ). when the occurrence tracked by the automaton that reaches the final state fails the expiry constraint , we just retire the current automaton ; any other automata for the episode will continue to accept events . under this modified algorithm , in , the first automaton that reaches its final state tracks which violates the expiry time constraint of .so , we drop only this automaton .the next automaton that reaches its final state tracks .this occurrence has span less than .hence we increment the corresponding frequency count and retire all current automata for this episode .since there are no other occurrences non - overlapped with , the final frequency would be 1 .we denote this algorithm for counting the non - overlapped occurrences under an expiry - time constraint as no - x .the occurrences tracked by both no and no - x would be earliest transiting . note that several earliest transiting occurrences may end simultaneously .for example , in , , and all end together at .both and form maximal sets of non - overlapped occurrences .sometimes ( e.g. when determining the distribution of spans of occurrences for an episode ) we would like to track the _ innermost _ one among the occurrences that are ending together . in this example , this means we want to track the set of occurrences .this can be done by simply omitting the expiry - time check in the no - x algorithm .( that is , whenever an automaton reaches final state , irrespective of the span of the occurrence tracked by it , we increment frequency and retire all other automata except for the one in start state ) .we denote this as the no - i algorithm and this is the algorithm proposed in . in no - i , if we only retire automata that reached their final states ( rather than retire all automata except the one in the start state ) , we have an algorithm for counting minimal occurrences ( denoted mo ) . 
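A compact way to realise the multi-automaton idea behind the NO-X / NO-I / MO family just described is to keep, for each automaton state, only the start time of the newest automaton occupying it. The sketch below is our own rendering of the MO policy (it is not the paper's Algorithm 1): every automaton transits as soon as possible, a newer automaton displaces an older one in the same state, and each arrival in the accepting state is recorded as a minimal window. Retiring all other automata whenever an occurrence is accepted would give NO-I, and additionally checking the tracked span against the expiry threshold before counting would give NO-X.

```python
def minimal_windows(episode, sequence):
    """Track minimal windows of a serial episode with at most one automaton
    per state.  `episode` is a tuple of event types; `sequence` is a
    time-ordered list of (event_type, time) pairs.
    Returns a list of (start_time, end_time) minimal windows."""
    n = len(episode)
    start_of = [None] * (n + 1)   # start_of[j]: start time of the automaton in state j
    windows = []
    for etype, t in sequence:
        # scan states from most advanced to least advanced, so that no
        # automaton moves more than one step on a single event
        for j in range(n - 1, -1, -1):
            if episode[j] != etype:
                continue
            if j == 0:
                moving_start = t              # a fresh automaton leaves the start state
            elif start_of[j] is not None:
                moving_start = start_of[j]    # the automaton waiting in state j accepts
                start_of[j] = None
            else:
                continue
            if j + 1 == n:
                windows.append((moving_start, t))   # reached the accepting state
            else:
                start_of[j + 1] = moving_start      # newer automaton displaces any older one
    return windows

# With the sequence and episode of the earlier snippet:
# minimal_windows(("A", "B", "C"), sequence) -> [(1, 5), (6, 8)]
```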
in our example , the automata tracking , and are the ones that reach their final states in this algorithm .the time - windows of these occurrences constitute the set of all minimal windows of in .expiry time constraints can be incorporated by incrementing frequency only when the occurrence tracked has span less than the expiry - time threshold .the corresponding expiry - time algorithm is referred to as mo - x .the windows - based counting algorithm ( which we refer to as wb ) is also based on tracking earliest transiting occurrences .wb also uses multiple automata per episode to track minimal occurrences of episodes like in mo .the only difference lies in the way frequency is incremented .the algorithm essentially remembers , for each candidate episode , the last minimal window in which the candidate was observed .then , at each time tick , effectively , if this last minimal window lies within the current sliding window of width , frequency is incremented by one .this is because , an occurrence of episode exists in a given window if and only contains a minimal window of .it is easy to see that head frequency with a window - width of is simply the number of earliest transiting occurrences whose span is less than .thus we can have a head frequency counting algorithm ( referred to here as hd ) that is similar to mo - x except that when two automata reach the same state simultaneously we do not remove the older automaton .this way , hd will track all earliest transiting occurrences which satisfy an expiry time - constraint of . for and for episode , hd tracks , , and and returns a frequency count of . the total frequency count for an episode is the minimum of the head frequencies of all its subepisodes ( including itself ) .this can be computed as the minimum of the head frequency of and the total frequency of its -suffix subepisodes which would have been computed in the previous pass over the data .( see for details ) .the head frequency counting algorithm can have high space - complexity as all the time instants at which automata make their first state transition need to be remembered .the non - interleaved frequency counting algorithm ( which we refer to as ni ) differs from the minimal occurrence algorithm in that , an automaton makes a state transition only if there is no other automaton of the same episode in the destination state . unlike the other frequency counting algorithms discussed so far , such an fsa transition policy will track occurrences which are not necessarily earliest transiting . in our example , until the event in the data sequence , both the minimal and non - interleaved algorithms make identical state transitions .however , on , ni will not allow the automaton in state to make a state transition as there is already an active automaton for in state which had accepted earlier .eventually , ni tracks the occurrences ] , ] .while there are no algorithms reported for counting distinct occurrences , we can construct one using the same ideas . 
such an algorithm ( to be called as do )differs from the one for counting minimal occurrences , in allowing multiple automata for an episode to reach the same state .however , on seeing an event which multiple automata can accept , only one of the automata ( the oldest among those in the same state ) is allowed to make a state transition ; the others continue to wait for future events with the same event - type as to make their state transitions .the set of maximal distinct occurrences of in are , ] , ] which are the ones tracked by this algorithm .we can also consider counting _ all _ occurrences of an episode even though it may be inefficient .the algorithm for counting _ all _ occurrences ( referred to as the ao ) allows all automata to make transitions whenever the appropriate events appear in the data sequence .however , at each state transition , a copy of the automaton in the earlier state is added to the set of active automata for the episode . from the above discussion, it is clear that by manipulating the fsa ( that recognize occurrences ) in different ways we get counting schemes for different frequencies .the choices to be made in different algorithms essentially concern when to initiate a new automaton in the start state , when to retire an existing automaton , when to effect a possible state transition and when ( and by how much ) to increment the frequency .we now present a unified scheme incorporating all this in _algorithm [ algo : unified ] _ for obtaining frequencies of a set of serial episodes .this algorithm has five boolean variables , namely , transit , copy - automaton , join - automaton , increment - freq and retire - automaton .the counting algorithms for all the different frequencies are obtained from this general algorithm by suitably setting the values of these boolean variables ( either by some constants or by values calculated using the current context in the algorithm ) .tables [ tab : transit ] [ tab : retire - automata ] specify the choices needed to obtain the algorithms for different frequencies .( a list of all algorithms is given in table [ tab : various - counts ] ) .as can be seen from our general algorithm , when an event type for which an automaton is waiting is encountered in the data , the the automaton can accept it only if the variable transit is true .hence for all algorithms that track earliest transiting occurrences , transit will be set to true as can be seen from table [ tab : transit ] . for algorithms ni anddo where we allow the state transition only if some condition is satisfied . the condition copy - automaton ( table [ tab : copy - automaton ] )is for deciding whether or not to leave another automaton in the current state when an automaton is transiting to the next state . except for no and ao, we create such a copy only when the currently transiting automaton is moving out of its start state . in nowe never make such a copy ( because this algorithm uses only one automaton per episode ) while in ao we need to do it for every state transition . 
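To fix ideas, the two switches discussed so far might be encoded as a small configuration table keyed by algorithm name, as in the illustrative Python fragment below. This is only our reading of the textual descriptions, not a reproduction of the paper's tables; conditional entries (such as the NI transition rule) would be callables evaluated in the current context of the generic algorithm.

```python
# Illustrative encoding of the first two switches for a few of the algorithms.
TRANSIT_ALWAYS = lambda ctx: True                               # earliest-transiting algorithms
TRANSIT_IF_FREE = lambda ctx: ctx["destination_state_free"]     # NI-style conditional transit

SWITCHES = {
    #            transit                  copy a new automaton on leaving ...
    "NO":   dict(transit=TRANSIT_ALWAYS,  copy="never"),
    "MO":   dict(transit=TRANSIT_ALWAYS,  copy="start state only"),
    "NO-X": dict(transit=TRANSIT_ALWAYS,  copy="start state only"),
    "NI":   dict(transit=TRANSIT_IF_FREE, copy="start state only"),
    "AO":   dict(transit=TRANSIT_ALWAYS,  copy="every transition"),
}
```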
as we have seen earlier , in some of the algorithms , when two automata for an episode reach the same state , the older automaton is removed .this is controlled by join - automaton , as given by table [ tab : join - automaton ] .increment - frequency ( table [ tab : increment - freq ] ) is the condition under which the frequency of an episode is incremented when an automaton reaches its final state .this increment is always done for algorithms that have no expiry time constraint or window width . for the others we increment the frequency only if the occurrence tracked satisfies the constraint .retire - automata condition ( table [ tab : retire - automata ] ) is concerned with removal of all automata of an episode when a complete occurrence has been tracked .this condition is true only for the non - overlapped occurrences - based counting algorithms .apart from the five boolean variables explained above , our general algorithm contains one more variable , namely , inc , which decides the amount by which frequency is incremented when an automaton reaches the final state .its values for different frequency counts are listed in table [ tab : inc ] .for all algorithms except wb , we set .we now explain how frequency is incremented in wb . to count the number of sliding windows that contain at least one occurrence of the episode , whenever a new minimal occurrence enters a sliding window, we can calculate the number of consecutive windows in which this new minimal occurrence will be found in .for example , in , with a window - width of , consider the first minimal occurrence of , namely , the occurrence constituted by events , , and .the first sliding window in which this occurrence can be found is ] .when this first minimal occurrence enters the sliding window ] , and hence , as per the _ else _ condition in _ table [ tab : inc ] _ , the is incremented by .similarly , when the second minimal occurrence enters the sliding window ] , with the second minimal window still occurring within this window .this third minimal occurrence remains in consecutive sliding windows until ] , where , ] , using dynamic programming .the algorithm , after processing , stores the row of this matrix .the dynamic programming recursion helps compute the row of this matrix from its row .whenever >s[i-1,n] ] the window of .given that is not a minimal window , we need to show that .since is not a minimal window , one of its proper sub - windows contains an occurrence , say , , of this episode .that means if starts at then it must end before .but , since is earliest transiting , any occurrence starting at the same event as can not end before . thus we must have .this means , by lemma [ lemma : power ] , since is earliest transiting , we can not have .since the window of has to be contained in the window of , we thus have . by definition, will start at the earliest possible position after . 
since there is an occurrence starting with we must have .now , since is earliest transiting , it can not end after .thus we must have .also , can not end earlier than because both are earliest transiting .thus , we must have .this completes proof of lemma .[ remark:4 - 2 ] this lemma shows that any et occurrence such that is a minimal occurrence .the converse is also true .consider a minimal window ] after and if it is also after then the fact that both and are et occurrences should mean which contradicts that is minimal .hence and are non - interleaved .thus , given the sequence of minimal windows , the earliest transiting occurrences from each of these minimal windows gives a sequence of ( same number of ) non - interleaved occurrences .this leads to as stated earlier in ( [ eq : all - f - relationships ] ) .the no - x algorithm can be viewed as a slight modification to the mo algorithm . as in the mo algorithm, we always have an automaton in the start state and all automata make transitions as soon as possible and when an automaton transits into a state occupied by another , the older one is removed . however , in the no - x algorithm, the increment - freq variable is true only when we have an occurrence satisfying constraint .hence , to start with , we look for the first minimal occurrence which satisfies the expiry time constraint and increment frequency . at this point ,( unlike in the mo algorithm ) we terminate all automata except the one in the start state since we are trying to construct a non - overlapped set of occurrences .then we look for the next earliest minimal occurrence ( which will be non - overlapped with the first one ) satisfying expiry time constraint and so on .since minimal occurrences locally have the least time span , this strategy of searching for minimal occurrences satisfying expiry time constraint in a non - overlapped fashion is quite intuitive .let denote the sequence of occurrences tracked by the no - x algorithm ( for an -node episode ) .then the following property of is obvious .[ property : algo3 ] is the earliest minimal occurrence satisfying expiry time constraints . is the first minimal occurrence ( satisfying expiry time constraint ) that starts after .there is no minimal occurrence satisfying expiry time constraint which starts after .[ theorem : maximality non - overlap constraints ] _ is a maximal non - overlapped sequence satisfying expiry time constraint . _ consider any other set of non - overlapped occurrences satisfying expiry constraints , = such that .let then we first show suppose .consider the earliest transiting occurrence starting from .this ends at or before by lemma [ lemma : power ] . 
among all et occurrences that end at the same event as ,the last one ( under the lexicographic ordering ) is a minimal occurrence by lemma [ lemma : minimal_et ] .its window is contained in that of which satisfies the expiry time constraint .hence we have found a minimal occurrence satisfying expiry constraint ending before which contradicts the first statement of property [ property : algo3 ] .hence .now applying the same argument to the data stream starting with the first event after , we get and so on and thus can conclude .this shows that no other set of non - overlapped occurrences can have more number of occurrences than those in .hence , is maximal .if we choose equal to the time span of the data stream , the no - x algorithm reduces to the no - i algorithm because every occurrence satisfies expiry constraint .hence proof of correctness of no - i algorithm is immediate .we now explain the relation between the sets of occurrences tracked by the no and no - i algorithms . as proved in the no algorithm ( which uses one automaton per episode ) , tracks a maximal non - overlapped sequence of occurrences , say , .since the no - i algorithm has no expiry time constraint , it also tracks a maximal set of non - overlapped occurrences . among all the et occurrences that end at ,let be the last one ( as per the lexicographic ordering ) .then the occurrence tracked by the no - i algorithm would be as we show now . since would be the first et occurrence , it is clear from our discussion in the previous subsection that the first occurrence tracked by the mo algorithm would be . as is easy to see, the mo and no - i algorithms would be identical till the first time an automaton reaches the accepting state .hence would be the first occurrence tracked by the no - i algorithm .now the no - i algorithm would remove all automata except for the one in the start state .hence , it is as if we start the algorithm with data starting with the first event after .now , by the property of no algorithm , would be the first et occurrence in this data stream and hence would be the first minimal window here . hence it is the second occurrence tracked by no - i and so on .the above also shows that each occurrence tracked by the no - i algorithm is also tracked by the mo algorithm and hence we have as stated in ( [ eq : all - f - relationships ] ) . is also a maximal set of non - overlapping minimal windows as discussed in .the algorithm ni which counts non - interleaved occurrences is different from all the ones discussed so far because it does not track et occurrences . herealso we always have an automaton waiting in the start state .however , the transitions are conditional in the sense that the created automaton makes a transition from state to provided the created automaton is past state after processing the current event .this is because we want the automata to track an occurrence non - interleaved with the occurrence tracked by automaton .let be the sequence of occurrences tracked by ni . from the above discussionit is clear that it has the following property ( while counting occurrences of ) .[ property : algo1 ] _ is the first or earliest occurrence ( of ) . for all and , is the first occurrence of ] after .there is no occurrence of beyond which is non - interleaved with it . _the proof that is a maximal non - interleaved sequence is very similar in spirit to that of the no - x algorithm . 
as earlier, we can show that given an arbitrary sequence of non - interleaved occurrences = , we have and hence get the correctness proof of ni algorithm .it is easy to verify the correctness of the do algorithm also along similar lines .it appears difficult to extend both the ni and do algorithms to incorporate expiry time constraints . for this we should track a set of occurrences of , where is the first occurrence satisfying and is the next earliest occurrence satisfying that is non - interleaved with ( or distinct from , in case of do ) and so on .note that this need not have to be the earliest occurrence non - overlapped with . at present , there are no algorithms for counting non - interleaved or distinct occurrences satisfying an expiry time constraint . before ending this section , we briefly outline what needs to be done when the data stream contains multiple events having the same time of occurrence .an important thing to note is that two events having the same time of occurrence can not be a part of a serial episode occurrence .hence , each automata can at most accept one event from a set of events having the same occurrence time . with this condition ,the do , ao and hd algorithms go through as before .one would need to process the set of events having the same occurrence time together and allow all the permissible automata to make a one step transition first as done using list in .after this , before processing the set of events with the next occurrence time , we would need to do the multiple automata check for the various candidate episodes and delete the appropriate older automata for algorithms mo , mo - x , no - i and no - x .for the non - interleaved algorithm , one needs to actually back track the transitions which resulted in two automata to coalesce .in this section , we discuss the anti - monotonicity properties of the various frequency counts , which in - turn are exploited by their respective candidate generation steps in the apriori - style level - wise procedure for frequent episode discovery .it is well known that the windows - based , non - overlapped and total frequency measures satisfy the anti - monotonicity property that _ all subepisodes of a frequent episode are frequent_. one can verify that the same holds for the distinct occurrences based frequency too .it has been pointed out in that the head frequency does not satisfy this anti - monotonicity property . for an episode , in general ,only the subepisodes involving ] have to be frequent .the head frequency definition has some limitations in the sense that the frequency of the -node suffix subepisode - node episode \rightarrow \alpha[2 ] \rightarrow \cdots \rightarrow \alpha[n] ] and its -node suffix subepisode is \rightarrow \alpha[k+2 ] \rightarrow \cdots \rightarrow \alpha[n] ] .consider the earliest occurrence of the prefix subepisode starting from and let be its window .any proper sub - window of starting at and containing an occurrence of contradicts lemma [ lemma : power ] . a proper sub - window of containing an occurrence of starting after would contradict the minimality of itself .hence is a minimal window of starting at .we hence conclude that has a frequency of at least . a similar proof works for the suffix subepisode by considering the window of the last occurrence of the suffix subepisode ending at . let be a maximal non - interleaved sequence . 
from each occurrence ,we choose the sub - occurrence $ ] , of .it is easy to see that this new sequence of occurrences forms a non - interleaved sequence .hence the frequency of is at least .a similar argument works for the suffix episode .hence , for every episode , we extract its suffix , go down the candidate list and search for a block of episodes whose prefix matches this suffix .we form candidates as many as the number of episodes in this matching block .this kind of candidate generation has already been reported in the literature in , and in the context of sequences under inter - event time constraints .the framework of frequent episodes in event streams is a very useful data mining technique for unearthing temporal dependencies from data streams in many applications .the framework is about a decade old and many different frequency measures and associated algorithms have been proposed over the last ten years . in this paperwe have presented a generic automata - based algorithm for obtaining frequencies of a set of candidate episodes .this method unifies all the known algorithms in the sense that we can particularize our algorithm ( by setting values for a set of variables ) for counting frequent episodes under any of the frequency measures proposed in literature .as we showed here , this unified view gives useful insights into the kind of occurrences counted under different frequency definitions and thus also allows us to prove relations between frequencies of an episode under different frequency definitions .our view also allows us to get correctness proofs for all algorithms .we introduced the notion of earliest transiting occurrences and , using this concept , are able to get simple proofs of correctness for most algorithms .this has also allowed us to understand the kind of anti - monotonicity properties satisfied by different frequency measures .while the main contribution of this paper is this unified view of all frequency counting algorithms , some of the specific results presented here are also new . the relationships between different frequencies of an episode ( cf .eqn [ eq : all - f - relationships ] ) , is proved here for the first time .the distinct - occurrences based frequency and an automata - based algorithm for it are novel .the specific proof of correctness presented here for minimal occurrences is also novel .also , the correctness proofs for non - overlapped occurrences based frequency counting under expiry time constraint has been provided here for the first time . in this paperwe have considered only the case of serial episodes .this is because , at present , there are no algorithms for discovering general partial orders under the various frequency definitions .however , all counting algorithms explained here for serial episodes can be extended to episodes with a general partial order structure .we can come up with a similar finite state automata(fsa ) which track the earliest transiting occurrences of an episode with a general partial order structure .for example , consider a partial order episode which represents and occurring in any order followed by a . in order to track an occurrence of such a pattern, the initial state has to wait for either of and . on seeing an it goes to state-1 where it waits only for a ; on the other hand , on seeing a first it moves to state-2 where it waits only for an . 
then on seeing a in state-1 or seeing a in state-2 it moves into state-3 where it waits for a and so on .thus , in each state in such a fsa , in general , we wait for any of a set of event types ( instead of a single event for serial episodes ) and a given state will now branch out into different states on different event types .with such a fsa technique it is possible to generalize the method presented here so that we have algorithms for counting frequencies of general partial order episodes under different frequencies .the proofs presented here for serial episodes can also be extended for general partial order episodes . while it seems possible , as explained above , to generalize the counting schemes to handle general partial order episodes , it is not obvious what would be an appropriate candidate generation scheme for general partial order episodes under different frequency definitions .this is an important direction for future work . in this paper , we have considered only expiry time constraint which prescribes an upper bound on the span of the occurrence. it would be interesting to see under what other time constraints ( e.g. , gap constraints ) , design of counting algorithms under this generic framework is possible . also , some unexplored choice of the boolean conditions in the proposed generic algorithm may give rise to algorithms for new useful frequency measures .this is also a useful direction of research to explore .bouchra bouqata , christopher d. caraothers , boleslaw k. szymanski , and mohammed j. zaki .vogue : a novel variable order - gap state machine for modeling sequences . in _ proc .european conf .principles and practice of knowledge discovery in databases ( pkdd06 ) _ , pages 4254 , sep 2006 .iwanuma k. , takano y. , and nabeshima h. on anti - monotone frequency measures for extracting sequential patterns from a single very - long sequence .in _ proc .ieee conf .cybernetics and intelligent systems _ , pages 213217 , dec 2004 .srivatsan laxman , p. s. sastry , and k. p. unnikrishnan .a fast algorithm for finding frequent episodes in event streams . in _ proc .acm sigkdd intl conf. knowledge discovery and data mining ( kdd07 ) _ , pages 410419 , aug 2007 .srivatsan laxman , vikram tankasali , and ryen w. white .stream prediction using a generative model based on frequent episodes in event sequences . in _ proc .acm sigkdd intl conf. knowledge discovery and data mining ( kdd09 ) _ , pages 453461 , jul 2008 .nicolas meger and christophe rigotti .constraint - based mining of episode rules and optimal window sizes . in _ proc .european conf .principles and practice of knowledge discovery in databases ( pkdd04 ) _ , september 2004 . anny nag and ada fu , wai - chee . mining freqeunt episodes for relating financial events and stock trends . in _ proc .pacific - asia conf .knowledge discovery and data mining , ( pakdd 2003 ) _ , pages 2739 , 2003 .min - feng wang , yen - ching wu , and meng - feng tsai . exploiting frequent episodes in weighted suffix tree to improve intrusion detection system . in _ proc .intl conf .advanced information networking and applications(aina08 ) _ , pages 12461252 , mar 2008 .
|
the frequent episode discovery framework is a popular framework in temporal data mining with many applications. over the years, many different notions of the frequency of an episode have been proposed, along with different algorithms for episode discovery. in this paper we present a unified view of all such frequency counting algorithms. we present a generic algorithm such that all current algorithms are special cases of it. this unified view allows one to gain insights into the different frequencies, and we present quantitative relationships among them. our unified view also helps in obtaining correctness proofs for the various algorithms, as we show here. we also point out how this unified view helps us to generalize the algorithms so that they can discover episodes with general partial orders.
|
legal text , along with other natural language text data , e.g. scientific literature , news articles or social media , has seen an exponential growth on the internet and in specialized systems . unlike other textual data ,legal texts contain strict logical connections of law - specific words , phrases , issues , concepts and factors between sentences or various articles .those are for helping people to make a correct argumentation and avoid ambiguity when using them in a particular case .unfortunately , this also makes information retrieval and question answering on legal domain become more complicated than others .there are two primary approaches to information retrieval ( ir ) in the legal domain : manual knowledge engineering ( ke ) and natural language processing ( nlp ) . in the ke approach ,an effort is put into translating the way legal experts remember and classify cases into data structures and algorithms , which will be used for information retrieval .although this approach often yields a good result , it is hard to be applied in practice because of time and financial cost when building the knowledge base .in contrast , nlp - based ir systems are more practical as they are designed to quickly process terabytes of data by utilizing nlp techniques .however , several challenges are presented when designing such system .for example , factors and concepts in legal language are applied in a different way from common usage .hence , in order to effectively answer a legal question , it must compare the semantic connections between the question and sentences in relevant articles found in advance . given a legal question , retrieving relevant legal articles and deciding whether the content of a relevant article can be used to answer the question are two vital steps in building a legal question answering system .kim et al . exploited ranking svm with a set of features for legal ir and convolutional neural network ( cnn ) combining with linguistic features for question answering ( qa ) task . however , generating linguistic features is a non - trivial task in the legal domain .carvalho et al . utilized n - gram features to rank articles by using an extension of tf - idf . for qa task ,the authors adopted adaboost with a set of similarity features between a query and an article pair to classify a query - article pair into yes " or no " .however , overfitting in training may be a limitation of this method .sushimita et al . used the voting of hiemstra , bm25 and pl2f for ir task .meanwhile , tran et al . used hidden markov model ( hmm ) as a generative query model for legal ir task .kano addressed legal ir task by using a keyword - based method in which the score of each keyword was computed from a query and its relevant articles using inverse frequency . after calculating ,relevant articles were retrieved based on three ranked scores .these methods , however , lack the analysis of feature contribution , which can reveal the relation between legal and nlp domain .this paper makes the following contributions : * we conduct detailed experiments over a set of features to show the contribution of individual features and feature groups .our experiments benefit legal domain in selecting appropriate features for building a ranking model . 
*we analyze the provided training data and conclude that : ( i ) splitting legal articles into multiple single - paragraph articles , ( ii ) carefully initializing parameters for cnn significantly improved the performance of legal qa system , and ( iii ) integrating additional features in ir task into qa task leads to better results . *we propose to classify a query - article pair into `` yes '' or `` no '' by voting , in which the score of a pair is generated from legal ir and legal qa model . in the following sections , we first show our idea along with data analysis in the context of coliee .next , we describe our method for legal ir and legal qa tasks . after building a legal qa system ,we show experimental results along with discussion and analysis .we finish by drawing some important conclusions .in the context of coliee 2016 , our approach is to build a pipeline framework which addresses two important tasks : ir and qa . in figure [ fig : coliee_proposed_model ] , in training phase , a legal text corpus was built based on all articles .each training query - article pair for lir task and lqa task was represented as a feature vector .those feature vectors were utilized to train a learning - to - rank ( l2r ) model ( ranking svm ) for ir and a classifier ( cnn ) for qa .the red arrows mean that those steps were prepared in advance . in the testing phase , given a query , the system extracts its features and computes the relevance score corresponding to each article by using the l2r model .higher score yielded by svm - rank means the article is more relevant . as shown in figure [ fig : coliee_proposed_model ] , the article ranked first with the highest score , i.e. 2.6 , followed by other lower score articles . after retrieving a set of relevant articles ,cnn model was employed to determine the yes " or no " answer of the query based on these relevant articles .the published training dataset in coliee 2016miyoung2/coliee2016/ ] consists of a text file containing japanese civil code and eight xml files .each xml file contains multiple pairs of queries and their relevant articles , and each pair has a label yes " or no " , which confirms the query corresponding to the relevant articles .there is a total of 412 pairs in eight xml files and 1,105 articles in the japanese civil code file , and each query can have more than one relevant articles . after analyzing the dataset in the civil code file, we observed that the content of a query is often more or less related to only a paragraph of an article instead of the entire content .based on that , each article was treated as one of two types : single - paragraph or multiple - paragraph , in which a multiple - paragraph article is an article which consists of more than one paragraphs .there are 7 empty articles , 682 single - paragraph articles and the rest are multiple - paragraph . 
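a minimal sketch of how this paragraph-level breakdown of the civil code might be computed is given below; the article-header pattern, the function names and the convention that every non-empty line of an article body counts as one paragraph are our own illustrative assumptions and not part of the original system.

import re

def split_into_articles(civil_code_text):
    # split the plain-text civil code into (article_id, body) pairs;
    # the header pattern "Article <number>" is only an assumption about the file format
    parts = re.split(r"\n(?=Article\s+\d)", civil_code_text)
    articles = []
    for part in parts:
        lines = part.strip().splitlines()
        if lines:
            articles.append((lines[0].strip(), "\n".join(lines[1:]).strip()))
    return articles

def to_single_paragraph_articles(articles):
    # emit single-paragraph articles such as 233(1), 233(2), ... from a multiple-paragraph article
    singles = []
    for art_id, body in articles:
        paragraphs = [p for p in body.splitlines() if p.strip()]
        if len(paragraphs) <= 1:
            singles.append((art_id, body))
        else:
            for i, p in enumerate(paragraphs, start=1):
                singles.append(("%s(%d)" % (art_id, i), p))
    return singles

counting the two article types then amounts to checking the number of paragraphs of each entry returned by split_into_articles.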
based on our findings, we proposed to split each multiple - paragraph article into several independent articles according to their paragraphs .for instance , in table [ tab : multiple - article ] , the article 233 consisting of two paragraphs was split into two single - paragraph articles 233(1 ) and 233(2 ) .after splitting , there are in total 1,663 single - paragraph articles ..[tab : multiple - article ] splitting a multiple - paragraph article into some single - paragraph articles [ cols="^,<",options="header " , ] interestingly , article 653 has the highest relevant score in non - splitting method and rank in splitting approach .the reason for this is that article 653 shares other words like _ mandatary _ , _ mandator _ as well .therefore , it makes retrieval system confuse and yield incorrect order rank . by using single - paragraph ,the system can find more accurately which part of the multiple - paragraph article is associated with the query s content .this work investigates ranking svm model and cnn for building a legal question answering system for japan civil code .experimental results show that feature selection affects significantly to the performance of svm - rank , in which a set of features consisting of _ ( lsi , manhattan , jaccard ) _ gives promising results for information retrieval task . for questionanswering task , the cnn model is sensitive to initial values of parameters and exerts higher accuracy when adding auxiliary features . in our current work ,we have not yet fully explored the characteristics of legal texts in order to utilize these features for building legal qa system .properties such as references between articles or structured relations in legal sentences should be investigated more deeply .in addition , there should be more evaluation of svm - rank and other l2r methods to observe how they perform on this legal data using the same feature set .these are left as our future work .this work was supported by jsps kakenhi grant number 15k16048 , jsps kakenhi grant number jp15k12094 , and crest , jst .zhe cao , tao qin , tie - yan liu , ming - feng tsai , hang li . learning to rank : from pairwise approach to listwise approach . "proceedings of the 24th international conference on machine learning .acm , 2007 .shen , yelong , xiaodong he , jianfeng gao , li deng , and grgoire mesnil . learning semantic representations using convolutional neural networks for web search . "proceedings of the 23rd international conference on world wide web .acm , 2014 .minh - tien nguyen , quang - thuy ha , thi - dung nguyen , tri - thanh nguyen , le - minh nguyen . recognizing textual entailment in vietnamese text : an experimental study ." knowledge and systems engineering ( kse ) , 2015 seventh international conference on .ieee , 2015 .freund , yoav , and robert e. schapire . a desicion - theoretic generalization of on - line learning and an application to boosting ." european conference on computational learning theory . 
springer berlin heidelberg , 1995 .kiyoun kim , seongwan heo , sungchul jung , kihyun hong , and young - yik rhim .`` an ensemble based legal information retrieval and entailment system '' , tenth international workshop on juris - informatics ( jurisin ) , 2016 ( submission i d : * ilis7 * ) daiki onodera and masaharu yoshioka .`` civil code article information retrieval system based on legal terminology and civil code article structure '' , tenth international workshop on juris - informatics ( jurisin ) , 2016 ( submission i d : * hukb * ) mi - young kim , randy goebel , yoshinobu kano , and ken satoh .`` coliee-2016 : evaluation of the competition on legal information extraction and entailment '' , tenth international workshop on juris - informatics ( jurisin ) , 2016 mi - young kim , ying xu , yao lu and randy goebel , `` legal question answering using paraphrasing and entailment analysis '' , tenth international workshop on juris - informatics ( jurisin ) , 2016 ( submission i d : * uofa * ) ryosuke taniguchi and yoshinobu kano , `` legal yes / no question answering system using case - role analysis '' , tenth international workshop on juris - informatics ( jurisin ) , 2016 ( submission i d : * kis * )
|
this paper presents a study of employing ranking svm and a convolutional neural network for two tasks: legal information retrieval and question answering in the competition on legal information extraction/entailment. for the first task, our proposed model uses a triple of features (lsi, manhattan, jaccard) and works at the paragraph level instead of the article level as in previous studies; each single-paragraph article corresponds to a particular paragraph of a larger multiple-paragraph article. for the legal question answering task, additional statistical features from the information retrieval task, integrated into the convolutional neural network, contribute to higher accuracy.
|
the resource sharing in shared access networks like cable internet based on hybrid fiber - coaxial ( hfc ) networks or passive optical networks ( pons ) is a key to achieving lower infrastructure cost and higher energy efficiency .the full sharing of the bandwidth available among subscribers in a shared access network , however , is hindered by the current practice of traffic control by internet service providers ( isps ) , which is illustrated in fig . [fig : isp_traffic_control ] ; due to the arrangement of traffic shapers ( i.e. , token bucket filters ( tbfs ) ) and a scheduler in the access switch , the capability of allocating available bandwidth by the scheduler is limited to the _ traffic already shaped _ per service contracts with subscribers . even though the allocation of excess bandwidth in a shared link has been discussed in the general context of quality of service ( qos ) control ( e.g. , ) , it is recently when the issue was studied in the specific context of isp traffic control in shared access .based on the isp traffic control schemes proposed in and , we have been studying the design of flexible yet practical isp service plans exploiting the excess bandwidth allocation in shared access networks under a hybrid isp traffic control architecture in order to gradually introduce the excess bandwidth allocation while providing backward compatibility with the existing traffic control infrastructure .to the best of our knowledge , our work in is the first effort to study the issue of enabling excess bandwidth allocation among the subscribers , together with its business aspect , in the context of isp traffic control in shared access . in this paper, we report the current status of our modeling of the hybrid isp traffic control schemes and service plans exploiting excess bandwidth in shared access networks with omnet++ and inet - hnrl based on the stacked virtual local area networks ( vlans ) of ieee standard 802.1q .in this section , we briefly review the hybrid isp traffic control schemes and service plans for shared access that we proposed in . fig . [fig : hybrid_traffic_control ] shows the proposed architecture for hybrid isp traffic control , where there coexist subscribers for the current flat - rate service plan and those for a new service plan fully sharing the bandwidth among them . for backward compatibility with the existing traffic control and pricing schemes ,the new service plan subscribers are grouped together and treated as one _ virtual _ subscriber under the flat - rate service plan ; at the same time , the traffic from each subscriber of the new service plan is individually controlled by an isp traffic control scheme enabling excess bandwidth allocation within the group .the migration toward fully - shared access will be completed when all the subscribers of the flat - rate service plan move to the new service plan exploiting excess bandwidth allocation .note that , for the new service plan to be acceptable , it is desirable that there should be no disadvantage in adopting the new service plan for both isp and its subscribers compared to the existing flat - rate service plan . in this regard ,we can derive requirements for the new service plan to meet in terms of parameters for existing flat - rate service plans , including monthly price , token generation rate , and token bucket size .interested readers are referred to for details .as discussed in , we have already implemented models of the shared access network shown in fig . 
[fig : isp_traffic_control ] based on vlan as part of inet - hnrl , because we want abstract models that can provide features common to specific systems ( e.g. , cable internet and ethernet pon ( epon ) ) , while being practical enough to be compatible with other components and systems of the whole network . in the vlan - based shared access models , we use a vlan identifier ( vid ) to identify each subscriber , which is similar to the service identifier ( sid ) in cable internet and the logical link identifier ( llid ) in epon . for the implementation of models for the hybrid isptraffic control shown in fig .[ fig : hybrid_traffic_control ] , we can think of two distinct approaches , i.e. , an integrated approach where we implement the whole scheduling as one system ( e.g. , based on the hierarchical token bucket ( htb ) scheduler ) and a modular approach where we integrate separate schedulers ( e.g. , a scheduler based on tbf shaping and a drr - based scheduler enabling excess bandwidth allocation ) into one . considering the ease of the management of two separate groups of subscribers and the upgradability of the component scheduler allocating excess bandwidth independently of the traditional one based on tbfs, we have chosen a modular approach and again based our implementation on vlan .unlike existing models based on a single vlan tag per frame , we need two different ways of identifying frames from the subscribers for the new hybrid traffic control scheme and service plan : as for the existing tbf - based traffic control scheme , the whole frames from those subscribers need to be identified and treated as a group ( i.e. , one virtual subscriber ) for traffic shaping and scheduling ; as for the new excess - bandwidth - allocating traffic control scheme , on the other hand , the frames from each subscriber need to be identified and treated as a separate flow . fortunately , this requirement of hierarchical identification of ethernet frames under the new hybrid traffic control scheme can be met by the technique of _ stacked vlans _ ( also called _ provider bridging _ and _ q - in - q _ ) , which is now part of ieee standard 802.1q .the change of ethernet frame formats related with the vlan stacking and two tag operations are shown in fig .[ fig : frame_formats ] . note that the tag protocol identifier ( tpid ) of the second service vlan ( s - vlan ) tag is set to a value of 0x88a8 , different from the value of 0x8100 for the first customer vlan ( c - vlan ) tag .[ fig : access_model ] shows stacked - vlan - based modeling of a shared access network with hybrid isp traffic control , while figs .[ fig : vlan_switch ] and [ fig : ethermac2 ] show the ethernet switch module for onus , olts , and access switches , and the ethernet mac module implementing hybrid isp traffic control , respectively ; as for the traffic control schemes enabling excess bandwidth allocation , there are implemented two queue types , i.e. , _csfqvlanqueue5 _ for the algorithm based on core - stateless fair queueing ( csfq ) and _ drrvlanqueue3 _ for the algorithm based on deficit round - robin ( drr ) . ) . ]first , the `` olt_c '' access switch carries out individual traffic control based on the customer vid ( c - vid ) of a frame with a single c - vlan tag , which is assigned to each subscriber , and sends resulting frames to the second access switch node `` olt '' . at the `` olt '', the c - vlan frames are grouped together with the second s - vlan tag ( i.e. 
, vlan stacking ) and go through another traffic control together with frames from other subscribers with normal ( i.e. , unstacked ) vlan tags .in this way , traffic for the subscribers of the new hybrid traffic control scheme and service plan goes through two stages of traffic control , i.e. , one at the `` olt_c '' exploiting excess bandwidth allocation and the other at the `` olt '' based on traditional tbf - based traffic shaping . in implementing models of the hybrid traffic control in shared access based on stacked vlans, we tried to meet the following major requirements : * _ backward compatibility _ with the existing vlan implementations in inet - hnrl , including * * _ ethernetframewithvlan _ message format * * _ macrelayunitnpwithvlan _ and _ vlantagger _ modules * _ expandability _ to stack more than two vlan tags consider the original definition of _ ethernetframewithvlan _ message shown in fig .[ fig : message_defs ] ( a ) .because the _ macrelayunitnpwithvlan _ switching module is based on the _ vid _ field of the _ ethernetframewithvlan _ message , which is directly accessible by the _ getvid ( ) _ member function , we had to keep these fields in the new definition of _ ethernetframewithvlan _ message .for stacking of vlan tags , on the other hand , we need to introduce _ innertags _ field based on the _ stack _ c++ type , which is shown in fig .[ fig : message_defs ] ( b ) and ignored by the existing modules based on the original definition of _ ethernetframewithvlan _ message , including _macrelayunitnpwithvlan _ module . in this way, we can meet both the requirements .( a ) + ( b ) note that in the current implementation of stacked vlans , broadcasting is not allowed across the hierarchies of stacked vlans . in the shared access network model shown in fig .[ fig : access_model ] , for example , broadcasting is possible among normal vlans or c - vlans within the same s - vlan .broadcasting over the hierarchies of stacked vlans requires the modification of the learning mechanism implemented in the current _ macrelayunitnpwithvlan _ module .in this paper we discuss the issues in current practice of isp traffic shaping and related flat - rate service plans in shared access networks and review alternative service plans based on new hybrid isp traffic control schemes exploiting excess bandwidth .we also report the current status of our modeling of the hybrid isp traffic control schemes and service plans with omnet++/inet - hnrl based on stacked vlans . in implementing models of the hybrid traffic control in shared access based on stacked vlans , we maintain backward compatibility with the existing modules for ethernet switching and vlan tagging andyet enable the support of stacking of multiple vlan tags by clever modification of the message definition for ethernet frame with vlan tags .this work was supported by xian jiaotong - liverpool university research development fund ( rdf ) under grant reference number rdf-14 - 01 - 25 .l. farmer and k. s. kim , `` cooperative isp traffic shaping schemes in broadband shared access networks , '' in _ proc . the 4th international workshop on fiber optics in access network ( foan 2013 ) _ , sep . 2013 , pp .2125 .a. varga , `` the omnet++ discrete event simulation system , '' in _ proc . the european simulation multiconference ( pesm2001 ) _ , prague , czech republic , jun .2001 , pp . 319324 .[ online ] .available : http://www.omnetpp.org/
|
the current practice of internet service providers shaping subscriber traffic with a token bucket filter may result in a severe waste of network resources in shared access networks; except for a short period of time proportional to the token bucket size, it cannot allocate excess bandwidth among active subscribers even when there are only a few of them. to better utilize the network resources in shared access networks, we recently proposed and analyzed the performance of access traffic control schemes that can allocate excess bandwidth among active subscribers in proportion to their token generation rates. also, to exploit the excess bandwidth allocation enabled by the proposed traffic control schemes, we have been studying flexible yet practical service plans under a hybrid traffic control architecture, which are attractive to both an internet service provider and its subscribers in terms of revenue and quality of service. in this paper we report the current status of our modeling of the hybrid traffic control schemes and service plans with omnet++/inet-hnrl based on ieee standard 802.1q stacked vlans. keywords: isp traffic control, excess bandwidth allocation, stacked vlans.
|
let be a gibbs probability measure on with dimension very big , that is , where is some -finite reference measure on .our purpose is to study the gibbs sampling a markov chain monte carlo method ( mcmc in short ) for approximating .in fact , even for the simplest case where , as the denominator contains an exponential number of terms and each of them may be very big or small for high dimension , it is very difficult to model .let be the regular conditional distribution of knowing under ; and ( product measure ) , where is the dirac measure at the point .we see that which is a one - dimensional measure , easy to be realized in practice .the idea of the gibbs sampling consists in approximating via iterations of the one - dimensional conditional distributions .it is described as follows .given a starting configuration , let be a non - homogeneous markov chain defined on some probability space , such that and given then for and the conditional law of is .in other words , the transition probability at step is : * . therefore , * . finally , the gibbs sampling is the time - homogeneous markov chain , whose transition probability is .this mcmc algorithm is known sometimes as _ gibbs sampler _ in the literature ( see winkler , chapters 5 and 6 ) .it is actively used in statistical physics , chemistry , biology and throughout the bayesian statistics ( a sentence taken from ) .it was used by zegarlinski as a tool for proving the logarithmic sobolev inequality for gibbs measures , see also the second named author for a continuous time mcmc .our purpose is two - fold : * the convergence rate of to ; * the concentration inequality for .question ( 1 ) is a classic subject .earlier works by meyn and tweedie and rosenthal are based on the harris ergodicity theorem ( minorization condition together with the drift condition in the non - compact case ) .quantitative estimates in the harris ergodic theorem are obtained more recently by rosenthal and hairer and mattingly .but as indicated by diaconis , khare and saloff - coste , theoretical results obtained from the harris theorem are very far ( even too far ) from the convergence rate of numerical simulations in high dimension ( e.g. , ) .that is why diaconis , khare and saloff - coste use new methods and tools ( orthogonal polynomials , stochastic monotonicity and coupling ) for obtaining sharp estimates of ( total variation norm ) for several special models in bayesian statistics , with replaced by , a space of two different components . for the question ( 1 ), our tool will be the dobrushin interdependence coefficients ( very natural and widely used in statistical physics ) , instead of the minorization condition in the harris theorem or the special tools in .our main idea consists in constructing an appropriate coupling well adapted to the dobrushin interdependence coefficients , close to that of marton . to the second question, we will apply the recent theory on transport inequalities ( see marton , ledoux , villani , gozlan and lonard and references therein ) , and our approach is inspired from marton and djellout , guillin and wu for dependent tensorization of transport inequalities . 
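to fix ideas, the following minimal python sketch implements the systematic-scan gibbs sampler just described for a generic target distribution; the routine that draws from the one-dimensional conditional laws must be supplied by the user, and the function names as well as the burn-in/averaging convention are our own illustrative choices rather than part of the paper.

import numpy as np

def gibbs_sweep(x, sample_conditional, rng):
    # one full sweep: coordinate i is replaced by a draw from its conditional law
    # given the current values of all the other coordinates
    for i in range(len(x)):
        x[i] = sample_conditional(i, x, rng)
    return x

def empirical_mean(f, x0, sample_conditional, n_sweeps, burn_in, rng):
    # approximate the mean of the observable f under the target gibbs measure
    # by averaging f along the chain after a burn-in period
    x = np.array(x0, copy=True)
    total, count = 0.0, 0
    for k in range(n_sweeps):
        x = gibbs_sweep(x, sample_conditional, rng)
        if k >= burn_in:
            total += f(x)
            count += 1
    return total / count

here one sweep of the n one-dimensional updates corresponds to one step of the time-homogeneous chain whose convergence and concentration properties are studied below.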
see for monte carlo algorithms and diverse applications , and for concentration inequalities of general mcmc under the positive curvature condition .this paper is organized as follows .the main results are stated in the next section , and we prove them in section [ sec3 ] .throughout the paper , is a polish space with the borel -field , and is a metric on such that is lower semi - continuous on ( so does not necessarily generate the topology of ) . on the product space we consider the -metric if is the discrete metric on , becomes the hamming distance on , a good metric for concentration in high dimension as shown by marton .let be the space of probability measures on and is some fixed point ) .given , the -wasserstein distance between is given by where the infimum is taken over all probability measures on such that its marginal distributions are , respectively , and ( _ coupling of and , say _ ) .when ( _ the discrete metric _ ) , it is well known that recall the kantorovich rubinstein duality relation let be the given regular conditional distribution of knowing ._ throughout the paper _ , _ we assume that , for all and , where is some fixed point of , and is lipschitzian from to . _define the matrix of the -dobrushin interdependence coefficients obviously . then the well - known dobrushin uniqueness condition ( see )is read as or by the triangular inequality for the metric , when are probability measures , the kullback information ( or relative entropy ) of with respect to is defined as we say that the probability measure satisfies the _ -transport - entropy inequality _ on with some constant , if to be short , we write for this relation .this inequality , related to the phenomenon of measure concentration , was introduced and studied by marton , developed subsequently by talagrand , bobkov and gtze , djellout , guillin and wu and amply explored by ledoux , villani and gozlan - lonard .let us mention the following bobkov gtze s criterion .[ bg ] a probability measure satisfies the -transport - entropy inequality on with constant , that is , , if and only if for any lipschitzian function , is -integrable and where . in that case , another necessary and sufficient condition for is the gaussian integrability of , see djellout , guillin and wu . for further results and recent progressessee gozlan and lonard .[ rem21 ] recall also that w.r.t .the discrete metric , any probability measure satisfies with the sharp constant ( the well known ckp inequality ) .for any function , let be the lipschitzian coefficient w.r.t .the coordinate .it is easy to see that [ convergencerate ] under the dobrushin uniqueness condition , we have : a. for any lipschitzian function on and two initial distributions on , where is a coupling of , that is , the law of is for .b. in particular for any initial distribution on , where is a coupling of . by part ( b ) above is the unique invariant measure of under the dobrushin uniqueness condition , and converges exponentially rapidly to in the metric , showing theoretically why the numerical simulations by the gibbs sampling are very rapid . 
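since the displayed formulas did not survive the text extraction, it may help to recall the standard forms of the objects just introduced; the normalization of the constant follows the usual convention of djellout, guillin and wu and may differ slightly from the authors' own:

W_{1,d}(\mu,\nu) = \inf_{\pi} \int d_{l^1}(x,y)\,\pi(dx,dy) = \sup_{\|f\|_{\mathrm{Lip}(d)} \le 1} \Bigl( \int f\,d\mu - \int f\,d\nu \Bigr),

where the infimum runs over all couplings \pi of \mu and \nu (the second equality is the kantorovich-rubinstein duality), and \mu satisfies the transport-entropy inequality T_1(C) if

W_{1,d}(\nu,\mu) \le \sqrt{2\,C\,H(\nu|\mu)} \quad \text{for all probability measures } \nu,

which, by the bobkov-goetze criterion, is equivalent to \int e^{\lambda(f-\mu(f))}\,d\mu \le e^{C\lambda^2/2} for every 1-lipschitz f and every real \lambda.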
let us compare theorem [ convergencerate ] with the known results in on the convergence rate of the gibbs sampling .at first the convergence rate in those known works is in the total variation norm , not in the metric .when is the discrete metric , we have by part ( b ) of theorem [ convergencerate ] next , let us explain once again why the minorization condition in the harris theorem does not yield accurate estimates in high dimension ( see diaconis _ et al . _ for similar discussions based on concrete examples ) .indeed assume that is finite , then under reasonable assumption on , there are constant and a probability measure such that ( i.e. , almost the best minorization that one can obtain in the dependent case ) .hence by the doeblin theorem ( the ancestor of the harris theorem ) , so one requires at least an exponential number of steps for the right - hand side becoming small .our estimate of the convergence rate is much better in high dimension , that is the good point of theorem [ convergencerate ] .the weak point of theorem [ convergencerate ] is that our result depends on the dobrushin uniqueness condition , even in low dimension .if is small , the results in are already good enough . particularly the estimates of diaconis , khare and saloff - coste for the special space of two different components in bayesian statistics are sharp .we should indicate that the dobrushin uniqueness condition is quite natural for the exponential convergence of to with the rate independent of as in this theorem , since the dobrushin uniqueness condition is well known to be sharp for the phase transition of mean field models .finally , our tool ( dobrushin s interdependence coefficients ) is completely different from those in the known works .as indicated by a referee , it would be very interesting to investigate the convergence rate problem under the more flexible dobrushin shlosman analyticity condition ( i.e. , box version of dobrushin uniqueness condition , reference ) , but in that case we feel that we should change the algorithm : instead of , one uses the conditional distribution of knowing where is a box containing . a much more classical topic is glauber dynamics associated with the gibbs measures in finite or infinite volume .we are content here to mention only zegarlinski , martinelli and olivieri , and the lecture notes of martinelli for a great number of references. the convergence rate estimate above will be our starting point for computing the mean , that is , to approximate by the empirical mean .[ concentrationinequalities ] assume and for some constant , ( recall that for the discrete metric . )then for any lipschitzian function on with , we have : a. b. 
furthermore if holds , \\[-8pt ] & & \quad\leq\exp\biggl\{-\frac { t^{2}(1 - 2r_{1})^{2}n}{2c_{1}\alpha^{2 } n } \biggr\}\qquad \forall t>0,n\geq1,\nonumber\end{aligned}\ ] ] where in conclusion under the conditions of this theorem , when , the empirical means will approximate to exponentially rapidly in probability with the speed , with the bias not greater than .the speed is the correct one , as will be shown in the remark below .we do not know whether the concentration inequality with the speed still holds under the more natural dobrushin s uniqueness condition .we know only that does not imply that is contracting in the metric , see the example in remark [ remar3.3 ] .consider where is -lipschitzian with ( the observable of this type is often used in statistical mechanics ) .since , the inequality ( [ thm2a ] ) implies for all , which is of speed .let us show that the concentration inequality ( [ thm2a ] ) is sharp .in fact in the free case , that is , does not depend upon and , and is the product measure . in this case , in other words is a sequence of independent and identically distributed ( i.i.d . in short ) random variables valued in , of common law .since in the free case , the concentration inequality ( [ thm2d ] ) is equivalent to the transport inequality for , by gozlan - lonard .that shows also the speed in theorem [ concentrationinequalities ] is the correct one .we explain now why we do not apply directly the nice concentration results of joulin and ollivier for general mcmc .in fact under the condition that , we can prove that ( by lemma [ 1-norm ] ) .in other words the ricci curvature in is bounded from below by unfortunately we can not show that the ricci curvature is positive in the case where . if is unbounded , the results of , theorems 4 and 5 , do not apply here , because their granularity constant explodes .assume now that is bounded .if we apply the results ( , theorems 4 and 5 ) and their notations , their coarse diffusion constant is of order ; and their local dimension is of order ( by lemma [ transine ] below ) , and their granularity constant is of order . setting theorem 4 in says that if , for all and .so for small deviation , their result yields the same order gaussian concentration inequality , but for large deviation , their estimate is only exponential , not gaussian as one may expect in this bounded case . in ,theorem 5 , they get a same type gaussian - exponential concentration inequality with depending upon the starting point . anyway the key lemmas in this paper are necessary for applying the results of to this particular model . for the gibbs measure on ,marton established the talagrand transport inequality on equipped with the euclidean metric , under the dobrushin shlosman analyticity type condition .the second named author proved for on equipped with the metric , under . but those transport inequalities are for the equilibrium distribution , not for the gibbs sampling which is a markov chain with as invariant measure .however our coupling is very close to that of k. marton .for -mixing sequence of dependent random variables , rio and samson established accurate concentration inequalities , see also djellout , guillin and wu and the recent works by paulin and wintenberger for generalizations and improvements . 
in the markov chain case -mixingmeans the doeblin uniform ergodicity .if one applies the results in to the gibbs sampling , one obtains the concentration inequalities with the speed , where when holds with the discrete metric , is actually finite but it is of order by theorem [ convergencerate ] ( and its remarks ) .the concentration inequalities so obtained from are of speed , very far from the correct speed .when depends on a very small number of variables , since does not reflect the nature of such observable , one can imagine that our concentration inequalities do not yield the correct speed .in fact in the free case and for , the correct speed must be , not .for this type of observable , one may use the metric which reflects much better the number of variables in such observable .the ideas in marton should be helpful .that will be another history .given any two initial distributions and on , we begin by constructing our coupled non - homogeneous markov chain , which is quite close to the coupling by marton .let be a coupling of . and given then and where is an optimal coupling of and such that define the partial order on by if and only if .then , by ( [ dobrushin ] ) , we have for , \cr \vdots \cr \vdots \cr \mathbb{e } \bigl[d\bigl(x_{kn+i}^{n},y_{kn+i}^{n } \bigr)|x_{kn+i-1 } , y_{kn+i-1}\bigr ] } \leq b_{i } \pmatrix { d \bigl(x_{kn+i-1}^{1},y_{kn+i-1}^{1}\bigr ) \cr \vdots\cr \vdots \cr d\bigl(x_{kn+i-1}^{n},y_{kn+i-1}^{n } \bigr)},\ ] ] where therefore by iterations , we have let then we have the following lemma . [ infty - norm ] under , .we use the probabilistic method . under wecan construct markov chain , taking values in where is an extra point representing the cemetery , and write as follows : where the transition matrix from to is , more precisely for , here if and otherwise ( kronecker s symbol ) .then for any , when , we have .therefore , and thus so . by ( [ matrix ] ) above , markov property and iterations , let , then by lemma [ infty - norm ] now the results of this theorem follow quite easily from this inequality .in fact , \(a ) for any lipschitzian function , & \le & r^k \max_{1\le i\le n}{\mathbb{e}}d\bigl(z_0^i(1 ) , z_0^i(2)\bigr ) \sum_{i=1}^n \delta_i(f),\end{aligned}\ ] ] where the last inequality follows by ( [ maxine ] ) . that is ( [ a1 ] ) .\(b ) now for , , as , we have the desired result .we begin with [ 1-norm ] if ( i.e. , ) , then for the matrix given in ( [ q ] ) , in particular the last conclusion ( [ q2 ] ) follows from ( [ q1 ] ) and ( [ matrix ] ) .we show now ( [ q1 ] ) . by the definition of , it is not difficult to verify for , here we make the convention .this can be obtained again by the markov chain valued in constructed in lemma [ infty - norm ] .since for all , : that is the first line in the expression of .now for , as and if and , then and so this implies the expression of above by induction .thus for , where the last inequality holds because for fixed and , so the proof of ( [ q1 ] ) is completed .[ remar3.3 ] let be the gaussian distribution on with mean and the covariance matrix where .we have ( i.e. , and both hold ) ; and under , and are i.i.d .gaussian random variables with mean and variance . 
hence , and since , , = \bigl(r+r^2\bigr ) \bigl|x_2-x'_2\bigr| .\ ] ] thus , and the ricci curvature is positive if and only if .in other words , though we have missed many terms in the proof above , the estimate of can not be qualitatively improved .[ transine ] assume and , then the proof is similar to the one used by djellout , guillin and wu , theorem 2.5 .first for simplicity denote by and note that for , thus for any probability measure on such that , let }) ] , where }=(x^{1},\ldots , x^{i-1}) ] the law of for , all under law .define }) ] and } ] by induction as follows ( the marton coupling ) . at firstthe law of is the optimal coupling of and .assume that for some },x^{[1,i-1]})= ( y^{[1,i-1]},x^{[1,i-1]}) ] is the optimal coupling of }) ] , that is , }=y^{[1,i-1]},x^{[1,i-1]}=x^{[1,i-1 ] } \bigr ) = w_{1,d}\bigl(q_{i}\bigl(\cdot|y^{[1,i-1 ] } \bigr),p_{i}\bigl(\cdot|x^{[1,i-1]}\bigr)\bigr).\ ] ] obviously , },x^{[1,n]} ] , summing on and using jessen s inequality, we have })}{n}}+ \frac{\sum_{i=1}^{n}\sum_{j=1}^{i-1}c_{ij}\mathbb { e}d(y^{j},x^{j})}{n } \\ & = & \sqrt{\frac{2c_{1}h(q|p)}{n}}+\frac{\sum_{j=1}^{n-1}\mathbb { e}d(y^{j},x^{j})\sum_{i = j+1}^{n}c_{ij}}{n } \\ & \leq&\sqrt{\frac{2c_{1}h(q|p)}{n}}+\frac{r_1 \sum_{j=1}^{n}\mathbb{e}d(y^{j},x^{j})}{n}\end{aligned}\ ] ] the above inequality gives us that is , .theorem [ concentrationinequalities ] is based on the following dependent tensorization result of djellout , guillin and wu .[ lemdgw ] let be a probability measure on the product space .for any }:=(x_{1},\ldots , x_{k}) ] denote the regular conditional law of given } ] be the distribution of for .assume that : 1 .for some metric on , for all }\in e^{k-1}$ ] ; 2 .there is some constant such that for all real bounded lipschitzian function with , for all , }=x_{[1,k ] } \bigr)-\mathbb{e}_{\mathbb{p}}\bigl(f(x_{k+1},\ldots , x_{n } ) & & \quad \leq sd(x_{k},y_{k}).\end{aligned}\ ] ] then for all function on satisfying , we have equivalently , on with we are now ready to prove theorem [ concentrationinequalities ] .proof of theorem [ concentrationinequalities ] we will apply lemma [ lemdgw ] with being , and be the law of on . by ( [ matrixineq ] ) , lemma [ 1-norm ] and the condition that , the constant in lemma [ lemdgw ] is bounded from above by take , then the lipschitzian norm of w.r.t .the ( for ) is not greater than .thus by lemmas [ lemdgw ] and [ transine ] , so , by the classic approach , firstly using chebyshev s inequality , and then optimizing over , we obtain the desired part in theorem [ concentrationinequalities ] . furthermore by theorem [ convergencerate ] , we have thus , we obtain part in theorem [ concentrationinequalities ] from its part ( a ) .supported in part by thousand talents program of the chinese academy of sciences and le projet anr evol .we are grateful to the two referees for their suggestions and references , which improve sensitively the presentation of the paper .
|
the objective of this paper is to study gibbs sampling, a powerful markov chain monte carlo method, for computing the mean of an observable in very high dimension. under dobrushin's uniqueness condition, we establish an explicit and sharp estimate of the exponential convergence rate and prove gaussian concentration inequalities for the empirical mean.
|
a crucial prerequisite for the prediction of transport parameters of porous media is a suitable characterization of the microstructure . despite a long history of scientific studythe microstructure of porous media continues to be investigated in many areas of fundamental and applied research ranging from geophysics , hydrology , petrophysics and civil engineering to the materials science of composites .my primary objective in this article is to review briefly the application of local porosity theory , introduced in , as a method that provides a scale dependent geometric characterization of porous or heterogeneous media .a functional theorem of hadwiger emphasizes the importance of four set - theoretic functionals for the geometric characterization of porous media .in contrast herewith local porosity theory has emphasized geometric observables , that are not covered by hadwigers theorem .other theories have stressed the importance of correlation functions or contact distributions for characterization purposes .recently advances in computer and imaging technology have made threedimensional microtomographic images more readily available .exact microscopic solutions are thereby becoming possible and have recently been calculated .moreover , the availability of threedimensional microstructures allows to test approximate theories and geometric models and to distinguish them quantitatively .distinguishing porous microstructures in a quantitative fashion is important for reliable predictions and it requires apt geometric observables . examples of important geometric observables are porosity and specific internal surface area .it is clear however , that porosity and specific internal surface area alone are not sufficient to distinguish the infinite variety of porous microstructures .geometrical models for porous media may be roughly subdivided into the classical capillary tube and slit models , grain models , network models , percolation models , fractal models , stochastic reconstruction models and diagenetic models .little attention is usually paid to match the geometric characteristics of a model geometry to those of the experimental sample , as witnessed by the undiminished popularity of capillary tube models .usually the matching of geometric observables is limited to the porosity alone .recently the idea of stochastic reconstruction models has found renewed interest . in stochastic reconstruction modelsone tries to match not only the porosity but also other geometric quantities such as specific internal surface , correlation functions , or linear and spherical contact distributions .as the number of matched quantities increases one expects that also the model approximates better the given sample .matched models for sedimentary rocks have recently been subjected to a quantitative comparison with the experimentally obtained microstructures .a two - component porous sample is defined as the union of two closed subsets and where denotes the pore space ( or component 1 in a heterogeneous medium ) and denotes the matrix space ( or component 2 ) . for simplicityonly two - component media will be considered throughout this paper , but most concepts can be generalized to media with an arbitrary finite number of components .a particular pore space configuration may be described using the characteristic ( or indicator ) function of a set .it is defined for arbitrary sets as the geometrical problems in porous media arise because in practice the pore space configuration is usually not known in detail . 
on the other hand the solution of a physical boundary value problem would require detailed knowledge of the internal boundary , and hence of . while it is becoming feasible to digitize samples of several mm with a resolution of a few m this is not possible for larger samples .for this reason the true pore space is often replaced by a geometric model .one then solves the problem for the model geometry and hopes that its solution obeys in some sense .such an approach requires quantitative methods for the comparison of and the model .this in turn raises the problem of finding generally applicable quantitative geometric characterization methods that allow to evaluate the accuracy of geometric models for porous microstructues .the problem of quantitative geometric characterization arises also when one asks which geometrical characteristics of the microsctructure have the greatest influence on the properties of the solution of a given boundary value problem .some authors introduce more than one geometrical model for one and the same microstructure when calculating different physical properties ( e.g. diffusion and conduction ) .it should be clear that such models make it difficult to extract reliable physical or geometrical information .a general geometric characterization of stochastic media should provide macroscopic geometric observables that allow to distinguish media with different microstructures quantitatively .in general , a stochastic medium is defined as a probability distribution on a space of geometries or configurations .probability distributions and expectation values of geometric observables are candidates for a general geometric characterization .a general geometric characterization should fulfill four criteria to be useful in applications .these four criteria were advanced in .first , it must be well defined .this obvious requirement is sometimes violated .the so called `` pore size distributions '' measured in mercury porosimetry are not geometrical observables in the sense that they can not be determined from knowledge of the geometry alone .instead they are capillary pressure curves whose calculation involves physical quantities such as surface tension , viscosity or flooding history .second , the geometric characterization should be directly accessible in experiments .the experiments should be independent of the quantities to be predicted .thirdly , the numerical implementation should not require excessive amounts of data .this means that the amount of data should be manageable by contemporary data processing technology .finally , a useful geometric characterization should be helpful in the exact or approximate theoretical calculations .well defined geometric observables are the basis for the geometric characterization of porous media .a perennial problem in all applications is to identify those macroscopic geometric observables that are relevant for distinguishing between classes of microstructures .one is interested in those properties of the microstructure that influence the macroscopic physical behaviour . 
in generalthis depends on the details of the physical problem , but some general properties of the microstructure such as volume fraction or porosity are known to be relevant in many situations .hadwigers theorem is an example of a mathematical result that helps to identify an important class of such general geometric properties of porous media .it will be seen later , however , that there exist important geometric properties that are not members of this class .a geometric observable is a mapping ( functional ) that assigns to each admissible pore space a real number that can be calculated from without solving a physical boundary value problem .a functional whose evaluation requires the solution of a physical boundary value problem will be called a physical observable . before discussing examples for geometric observablesit is necessary to specify the admissible geometries .the set of admissible is defined as the set of all finite unions of compact convex sets . because is closed under unions and intersections it is called the convex ring .the choice of is convenient for applications because digitized porous media can be considered as elements from and because continuous observables defined for convex compact sets can be continued to all of .the set of all compact and convex subsets of is denoted as . for subsequent discussions the minkowski addition of two sets defined as multiplication of with a scalar is defined by for .examples of geometric observables are the volume of or the surface area of the internal . of a set defined as the difference between the closure and the interior of where the closure is the intersection of all closed sets containing and the interior is the union of all open sets contained in .] let denote the -dimensional lebesgue volume of the compact convex set .the volume is hence a functional on .an example of a compact convex set is the unit ball centered at the origin whose volume is other functionals on can be constructed from the volume by virtue of the following fact . for every compact convex and every there are numbers depending only on such that is a polynomial in .this result is known as steiners formula .the numbers define functionals on similar to the volume .the quantities are called quermassintegrals . from onesees that and from that .hence may be viewed as half the surface area .the functional is related to the mean width defined as the mean value of the distance between a pair of parallel support planes of .the relation is which reduces to for .finally the functional is evaluated from by dividing with and taking the limit .it follows that for all .one extends to all of by defining .the geometric observable is called euler characteristic .the geometric observables have several important properties .they are euclidean invariant ( i.e. invariant under rigid motions ) , additive and monotone .let denote the group of translations with vector addition as group operation and let be the matrix group of rotations in dimensions .the semidirect product is the euclidean group of rigid motions in .it is defined as the set of pairs with and and group operation an observable is called euclidean invariant or invariant under rigid motions if holds for all and all . 
here denotes the rotation of and its translation .a geometric observable is called additive if holds for all with .finally a functional is called monotone if for with follows .the special importance of the functionals arises from the following theorem of hadwiger .a functional is euclidean invariant , additive and monotone if and only if it is a linear combination with nonnegative constants .the condition of monotonicity can be replaced with continuity at the expense of allowing also negative , and the theorem remains valid .if is continuous on , additive and euclidean invariant it can be additively extended to the convex ring .the additive extension is unique and given by the inclusion - exclusion formula where denotes the family of nonempty subsets of and is the number of elements of . in particular , the functionals have a unique additive extension to the convex ring , which is again be denoted by . for a threedimensional porous sample with the extended functionals lead to two frequently used geometric observables .the first is the porosity of a porous sample defined as and the second its specific internal surface area which may be defined in view of as the two remaining observables and have received less attention in the porous media literature .the euler characteristic on coincides with the identically named topological invariant . for and one has where is the number of connectedness components of , and denotes the number of holes ( i.e. bounded connectedness components of the complement ) . for theoretical purposes the pore space is frequently viewed as a random set . in practical applications the pore spaceis usually discretized because of measurement limitations and finite resolution .for the purpose of discussion the set is a rectangular parallelepiped whose sidelengths are and in units of the lattice constant ( resolution ) of a simple cubic lattice .the position vectors with integers are used to label the lattice points , and is a shorthand notation for .let denote a cubic volume element ( voxel ) centered at the lattice site .then the discretized sample may be represented as .the discretized pore space , defined as is an approximation to the true pore space .for simplicity it will be assumed that the discretization does not introduce errors , i.e. that , and that each voxel is either fully pore or fully matrix .this assumption may be relaxed to allow voxel attributes such as internal surface or other quermassintegral densities .the discretization into voxels reflects the limitations arising from the experimental resolution of the porous structure .a discretized pore space for a bounded sample belongs to the convex ring if the voxels are convex and compact .hence , for a simple cubic discretization the pore space belongs to the convex ring . a configuration ( or microstructure ) of a -component medium may be represented in the simplest case by a sequence where runs through the lattice points and .this representation corresponds to the simplest discretization in which there are only two states for each voxel indicating whether it belongs to pore space or not . in general a voxel could be characterized by more states reflecting the microsctructure within the region . in the simplest casethere is a one - to - one correspondence between and given by .geometric observables then correspond to functions . 
as a convenient theoretical idealization it is frequently assumed that porous media are random realizations drawn from an underlying statistical ensemblea discretized stochastic porous medium is defined through the discrete probability density where in the simplest case .it should be emphasized that the probability density is mainly of theoretical interest . in practice usually not known .an infinitely extended medium or microstructure is called stationary or statistically homogeneous if is invariant under spatial translations .it is called isotropic if is invariant under rotations .a stochastic medium was defined through its probability distribution . in practice be even less accessible than the microstructure itself .partial information about can be obtained by measuring or calculating expectation values of a geometric observable .these are defined as where the summations indicate a summation over all configurations .consider for example the porosity defined in .for a stochastic medium becomes a random variable .its expectation is if the medium is statistically homogeneous then independent of .it happens frequently that one is given only a single sample , not an ensemble of samples .it is then necessary to invoke an ergodic hypothesis that allows to equate spatial averages with ensemble averages .the porosity is the first member in a hierarchy of moment functions .the -th order moment function is defined generally as for . where for are the quermassintegral densities for the voxel at site .] for stationary media where the function depends only on variables .another frequently used expectation value is the correlation function which is related to . for a homogeneous medium it is defined as where is an arbitrary reference point , and .if the medium is isotropic then .note that is normalized such that and .the hierarchy of moment functions , similar to , is mainly of theoretical interest . for a homogeneous medium a function of variables . to specify becomes impractical as increases .if only points are required along each coordinate axis then giving would require numbers . for implies that already at it becomes economical to specify the microstructure directly rather than incompletely through moment or correlation functions . an interesting geometric characteristic introduced and discussed in the field of stochastic geometry are contact distributions .certain special cases of contact distributions have appeared also in the porous media literature .let be a compact test set containing the origin .then the contact distribution is defined as the conditional probability if one defines the random variable then . for the unit ball in three dimensions called spherical contact distribution .the quantity is then the distribution function of the random distance from a randomly chosen point in to its nearest neighbour in .the probability density was discussed in as a well defined alternative to the frequently used pore size distrubution from mercury porosimetry . for an oriented unit interval where is the unit vector one obtains the linear contact distribution . 
the linear contact distribution written as sometimes called lineal path function .it is related to the chord length distribution defined as the probability that an interval in the intersection of with a straight line containing has length smaller than .the idea of local porosity distributions is to measure geometric observables inside compact convex subsets , and to collect the results into empirical histograms .let denote a cube of side length centered at the lattice vector .the set is called a measurement cell .a geometric observable , when measured inside a measurement cell , is denoted as and called a local observable .an example are local hadwiger functional densities with coefficients as in hadwigers theorem . herethe local quermassintegrals are defined using as for . in the followingmainly the special case will be of interest . for the local porosityis defined by setting , local densities of surface area , mean curvature and euler characteristic may be defined analogously .the local porosity distribution , defined as gives the probability density to find a local porosity in the measurement cell . here denotes the dirac -distribution .the support of is the unit interval . for noncubic measurement cells one defines analogously where is the local observable in cell .the concept of local porosity distributions was introduced in and has been generalized in two directions .firstly by admitting more than one measurement cell , and secondly by admitting more than one geometric observable . the general -cell distribution function is defined as for general measurement cells and observables .the -cell distribution is the probability density to find the values of the local observable in cell and in cell and so on until of local observable in .definition is a broad generalization of .this generalization is not purely academic , but was motivated by problems of fluid flow in porous media where not only but also becomes important .local quermassintegrals , defined in , and their linear combinations ( hadwiger functionals ) furnish important examples for local observables in , and they have recently been measured on real sandstone samples .the general -cell distribution in is very general indeed .it even contains from as the special case and with .more precisely one has because in that case if and for . in this wayit is seen that the very definition of a stochastic geometry is related to local porosity distributions ( or more generally local geometry distributions ) . as a consequencethe general -cell distribution is again mainly of theoretical interest , and usually unavailable for practical computations .expectation values with respect to have generalizations to averages with respect to .averaging with respect to will be denoted by an overline . in the special case and with onefinds thereby identifying the moment functions of order as averages with respect to an -cell distribution . for practical applications the -cell local porosity distributions and their analogues for other quermassintegralsare of greatest interest . for a homogeneous mediumthe local porosity distribution obeys for all lattice vectors , i.e. it is independent of the placement of the measurement cell .a disordered medium with substitutional disorder may be viewed as a stochastic geometry obtained by placing random elements at the cells or sites of a fixed regular substitution lattice . 
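both the correlation function and the local porosity distribution introduced above can be estimated from a single voxelized sample by spatial averaging (invoking the ergodic hypothesis discussed below). a minimal python sketch with illustrative function names; the correlation is sampled along one lattice axis only, and measurement cells are placed at every admissible (overlapping) position.

....
import numpy as np

def correlation_function(pore, max_shift=20):
    """Two-point function S2(r) = Prob(x in P and x + r in P) along one
    lattice axis, then normalized so that G(0) = 1 and G(r) -> 0 for
    large r (assumes 0 < porosity < 1)."""
    pore = np.asarray(pore, dtype=float)
    s2 = np.array([pore.mean() if r == 0
                   else (pore[..., :-r] * pore[..., r:]).mean()
                   for r in range(max_shift + 1)])
    phi = s2[0]                              # S2(0) equals the porosity
    return (s2 - phi**2) / (phi - phi**2)

def local_porosity_distribution(pore, L, bins=20):
    """Empirical local porosity distribution: porosity of every cubic
    measurement cell of side L voxels, collected into a normalized
    histogram over the unit interval."""
    pore = np.asarray(pore, dtype=float)
    nz, ny, nx = pore.shape
    values = [pore[k:k + L, j:j + L, i:i + L].mean()
              for k in range(nz - L + 1)
              for j in range(ny - L + 1)
              for i in range(nx - L + 1)]
    hist, edges = np.histogram(values, bins=bins, range=(0.0, 1.0),
                               density=True)
    return hist, edges
....

as discussed in the following paragraphs, overlapping placements oversample the central region of the sample, so the resulting histogram is only an estimate of the true distribution.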
for a substitutionally disordered mediumthe local porosity distribution is a periodic function of whose period is the lattice constant of the substitution lattice. for stereological issues in the measurement of from thin sections see .averages with respect to are denoted by an overline . for a homogeneous medium the average local porosityis found as independent of and .the variance of local porosities for a homogeneous medium defined in the first equality is related to the correlation function as given in the second equality .the skewness of the local porosity distribution is defined as the average the limits and of small and large measurement cells are of special interest . in the first case one reaches the limiting resolution at and finds for a homogeneous medium the limit is more intricate because it requires also the limit . for a homogeneous mediumshows for and this suggests for macroscopically heterogeneous media , however , the limiting distribution may deviate from this result .if holds then in both limits the geometrical information contained in reduces to the single number . if and hold there exists a special length scale defined as at which the -components at and vanish .the length is a measure for the size of pores .the ensemble picture underlying the definition of a stochastic medium is an idealization . in practice oneis given only a single realization and has to resort to an ergodic hypothesis for obtaining an estimate of the local porosity distributions .the local porosity distribution may then be estimated by where is the number of placements of the measurement cell .ideally the measurement cells should be far apart or at least nonoverlapping , but in practice this restriction can not be observed because the samples are not large enough .the use of instead of can lead to deviations due to violations of the ergodic hypothesis or simply due to oversampling the central regions of .transport and propagation in porous media are controlled by the connectivity of the pore space .local percolation probabilities characterize the connectivity .their calculation requires a threedimensional pore space representation , and early results were restricted to samples reconstructed laboriously from sequential thin sectioning in this section a relationship between the euler characteristic and the local percolation probabililties is established for the first time .consider the functional defined by where are two compact convex sets with and , and `` in '' means that there is a path connecting and that lies completely in . 
in the examples below the sets and correspond to opposite faces of the sample , but in general other choices are allowed .analogous to , which is defined for the whole sample , one defines for a measurement cell where and denote those two faces of that are normal to the direction .similarly denote the faces of normal to the - and -directions .two additional percolation observables and are introduced by indicates that the cell is percolating in all three directions while indicates percolation in - or - or -direction .the local percolation probabilities are defined as where the local percolation probability gives the fraction of measurement cells of sidelength with local porosity that are percolating in the `` ''-direction .the total fraction of cells percolating along the `` ''-direction is then obtained by integration this geometric observable is a quantitative measure for the number of elements that have to be percolating if the pore space geometry is approximated by a substitutionally disordered lattice or network model .note that neither nor are additive functionals , and hence local percolation probabilities are not covered by hadwigers theorem .it is interesting that there is a relation between the local percolation probabilities and the local euler characteristic .the relation arises from the observation that the voxels are closed , convex sets , and hence for any two voxels the euler characteristic of their intersection indicates whether two voxels are nearest neighbours . measurement cell contains voxels .it is then possible to construct a -matrix with matrix elements where and the sets and are two opposite faces of the measurement cell .the rows in the matrix correspond to voxels while the columns correspond to voxel pairs .define the matrix where is the transpose of .the diagonal elements give the number of voxels to which the voxel is connected .a matrix element differs from zero if and only if and are connected .hence the matrix reflects the local connectedness of the pore space around a single voxel .sufficiently high powers of provide information about the global connectedness of .one finds where is the matrix element in the upper right hand corner and is arbitrary subject to the condition .the set can always be decomposed uniquely into pairwise disjoint connectedness components ( clusters ) whose number is given by the rank of .hence provides an indirect connection between the local euler characteristic and the local percolation probabilities mediated by the matrix . )the theoretical concepts for the geometric characterization of porous media discussed here are also useful in effective medium calculations of transport parameters such as conductivity or permeability .the resulting parameterfree predictions agree well with the exact result .the purpose of this section is to show that the previous theoretical framework can be used directly for predicting transport in porous media quantitatively and without free macroscopic fit parameters .the theory presented above allows a quantitative micro - macro transition in porous media and opens the possibility to determine the elusive `` representative elementary volume '' needed for macroscopic theories in a quantitative and property specific manner . 
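before turning to the fontainebleau data, the percolation observables defined above can be evaluated on a voxelized measurement cell with a standard connected-component labelling. the sketch below uses scipy and assumes face (6-neighbour) connectivity of pore voxels; the function names are illustrative.

....
import numpy as np
from scipy import ndimage

def percolates(cell, axis):
    """1 if some pore cluster of the cubic cell touches both faces normal
    to `axis`, else 0 (the indicator for percolation in that direction)."""
    labels, _ = ndimage.label(cell)          # 6-connected pore clusters
    first = set(np.unique(np.take(labels, 0, axis=axis)))
    last = set(np.unique(np.take(labels, -1, axis=axis)))
    return int(bool((first & last) - {0}))   # label 0 is the matrix phase

def local_percolation(cell):
    """Percolation indicators along x, y, z together with percolation in
    all three directions and in at least one direction."""
    lam = [percolates(cell, ax) for ax in (2, 1, 0)]
    return lam, int(all(lam)), int(any(lam))
....

binning the cells by their local porosity and averaging these indicators inside each bin gives an estimate of the local percolation probabilities, and integrating against the local porosity distribution gives the total fraction of percolating cells.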
in reference a fully threedimensional experimental sample of fontainebleau sandstone was compared with three geometric models , some of which had not only the same porosity and specific internal surface area but also the same correlation function . a large number of the geometric quantities discussed above was calculated in . for a discussion of the influence of contact distributions see . the total fraction of percolating cells , defined in the equation above , was measured in for fontainebleau and some of its models . figure [ fig ] shows the total fraction of percolating cells as a function of length scale ( side length of measurement cells ) . it turns out that the quantity displayed in figure [ fig ] correlates very well with transport properties such as the hydraulic permeability . recently transport properties such as the permeabilities and formation factors of the fontainebleau sandstone and its geometries were calculated numerically exactly by solving the appropriate microscopic equations of motion on the computer . some of the results are summarized in table [ tab : trans ] below .

table [ tab : trans ] : physical transport properties of fontainebleau sandstone and three geometric models for it ( see ) . is the conductivity in the direction in units of , where is the conductivity of the material filling the pore space . is the permeability in the direction in md .

one sees from table [ tab : trans ] that while ex and dm are very similar in their permeabilities and formation factors the samples sa and gf have significantly lower values with gf being somewhat higher than sa . the same relationship is observed in figure [ fig ] for the percolation properties . these results show that the purely geometrical local percolation probabilities correlate surprisingly well with the hydraulic permeability and electrical conductivity that determine physical transport .

acknowledgement : many of the results discussed in this paper were obtained in earlier cooperations with b. biswal , c. manwart , j. widjajakusuma , p.e . øren , s. bakke , and j. ohser . i am grateful to all of them , and to the deutsche forschungsgemeinschaft as well as statoil a / s norge for financial support .

stell , g. ( 1985 ) . mayer - montroll equations ( and some variants ) through history for fun and profit . in shlesinger , m. and weiss , g. , editors , _ the wonderful world of stochastics _ , page 127 , amsterdam .
|
the paper discusses local porosity theory and its relation with other geometric characterization methods for porous media such as correlation functions and contact distributions . special emphasis is placed on the characterization of geometric observables through hadwigers theorem in stochastic geometry . the four basic minkowski functionals are introduced into local porosity theory , and for the first time a relationship is established between the euler characteristic and the local percolation probabilities . local porosity distributions and local percolation probabilities provide a scale dependent characterization of the microstructure of porous media that can be used in an effective medium approach to predict transport .

* review on scale dependent characterization of the microstructure of porous media * + r. hilfer + _ ica-1 , universität stuttgart , 70569 stuttgart , germany + institut für physik , universität mainz , 55099 mainz , germany _
|
at the beginning of the decade , i started to hear about low - cost 3d printers , and i decided that i wanted one .as the end of the 2012 - 2013 fiscal year approached , i asked my dean to purchase a makerbot replicator 2 for my department , and he agreed .soon , we had a new 3d printer set up in my office , and i had no idea what to do with it . with two reu studentsthat summer , we started asking the question , `` what can a mathematician do with a 3d printer ? '' the first thing to do was to `` print '' something .the makerbot came with several stereolithography ( stl ) files , which are large files that describe the object as a set of oriented triangles .stl files are processed by software known as a `` slicer '' , for reasons that will be obvious in a moment , and the results of the slicer direct the printer s actions . following these directions, a 3d printer creates an object by melting filament ( a spaghetti - like spool of plastic , usually a type known as polylactic acid , or pla ) at temperatures over 200 degrees celsius , and then extruding the hot plastic through a nozzle onto a build plate , layer - by - layer , or slice - by - slice .mathematically , working in three dimensions , the printer starts by laying down plastic in the plane , at various points , following paths created by the slicer .then the build plate moves down vertically a very small amount so that more plastic can be extruded in the plane .( the nozzle moves in the and directions , but not the direction . ) the material for this second slice usually is placed on top of the material in the plane .the printer continues to , , etc . , and there are a variety of rules to take into account curved surfaces , overhangs , and other aspects of a three - dimensional object , or else all we could ever print would be rectangular prisms and pyramids .one of our first prints can be found in figure [ cube ] .there are a variety of ways to create stl files .for very simple designs , stl files can be written in a text editor , where the vertices and outer normal vector of each triangle are listed , such as : .... facet normal 0.0 1.0 0.0 outer loop vertex 0.0 40.0 0.0 vertex 40.0 40.0 0.0 vertex0.0 40.0 40.0 endloop endfacet .... however , nearly all stl files are created with a software program of some sort , and saved in relatively efficient binary files .these files can be created using 3d analogues of microsoft paint available for free online such as tinkercad ( https://www.tinkercad.com/ ) , open source programs like openscad ( www.openscad.org ) , and proprietary programs like _ mathematica_. influenced by segerman s paper , as well as my affinity for mathematical analysis ( as opposed to geometry ) , my students and i have primarily used _mathematica _ to design objects for 3d printing . 
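for readers who want to see the facet syntax quoted earlier in action, here is a small python sketch that writes an ascii stl file from a list of triangles. the solid / endsolid wrapper lines and the function name are standard stl boilerplate and my own choices, not part of the snippet above; real designs normally use the much more compact binary stl.

....
def write_ascii_stl(path, triangles, name="part"):
    """Write an ASCII STL file. `triangles` is a list of
    (normal, (v1, v2, v3)) tuples, each entry a 3-tuple of floats giving
    the outward normal and the three vertices of one facet."""
    with open(path, "w") as f:
        f.write("solid {}\n".format(name))
        for normal, vertices in triangles:
            f.write("  facet normal {:.1f} {:.1f} {:.1f}\n".format(*normal))
            f.write("    outer loop\n")
            for v in vertices:
                f.write("      vertex {:.1f} {:.1f} {:.1f}\n".format(*v))
            f.write("    endloop\n")
            f.write("  endfacet\n")
        f.write("endsolid {}\n".format(name))

# the single facet shown in the text
write_ascii_stl("example.stl",
                [((0.0, 1.0, 0.0),
                  ((0.0, 40.0, 0.0), (40.0, 40.0, 0.0), (0.0, 40.0, 40.0)))])
....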
some of our creations from the past two years can be seen in figures [ samples ] and [ victors ] .there are other mathematicians who have designed and printed fantastic objects with 3d printers in the past four years .laura taalman spent a year doing daily mathematically - inspired 3d - printing projects that she documented on her blog ( http://makerhome.blogspot.com/ ) .george hart has created 3d printed fractals ( https://www.simonsfoundation.org/multimedia/3-d-printing-of-mathematical-models/ ) .links to many other projects can be found on my web site ( https://sites.google.com/site/aboufadelreu/profile/3d-printing ) .most of these projects are based on mathematical topics like knot theory , polyhedra , and minimal surfaces that have geometry as a foundation . however , i have always been more of an analyst , thinking in terms of functions defined on a domain , so my 3d printing projects have had more of a flavor of computing a function that represents a height above points on a compact set in the -plane .the maa pendant in figure 2 is a good example of this approach , with the domain being a circle of the form , and the function having a value of 0 where the hole for the clip is , 2 ( millimeters , which is the standard unit in 3d printing design ) for the parts of the outer parts of the pendant , and then something significantly more elaborate for the combination of the initials and the icosohedron .the first major project with my reu students was to use two photographs of the same object ( specifically , a hand ) to create a 3d print of the object , as seen in figure [ handprint ] .the idea was that the stereo effect of two photographs , like the way our pair of eyes works , allows us to deduce the distance of various points on the object to the cameras , and from there a depth function can be defined .after we completed this project , i started to wonder to what extent this could be done with just one photograph .this is not a new question ( for example , there is the new `` 123d catch '' app from autodesk , see http://www.123dapp.com/catch ) .however , i wanted to figure out how to do this with my methods : create and generate a 3d rendering of a function in _ mathematica _ that represents height or depth , export the rendering as an stl file , then slice and print .of course , the hard part is the creation of the function .and my naive use of grayscale , which i will describe in the next section , was nt going to get me very far .one of the tricks to create from an image like a logo is to base the values of the function on the shades in a grayscale image .grayscale describes intensity ( or , as we will discover below , _ luminance _ ) , with brighter shades having larger grayscale values .there are two equivalent versions of grayscale : a scale of 0 ( black ) to 1 ( white ) , and a scale of 0 to 255 ( which comes from using bit strings ). a jpg image can be imported into _mathematica _ and converted to 0 - 1 grayscale , represented in a large matrix , and then this matrix , or a scalar multiple , can be used as a height function defined discretely in a table .the ` listplot3d ` command in _ mathematica _ can then be used to nicely render the function .we can do more by transforming the matrix values with some sort of filter , such as re - assigning all values greater than 0.5 with a height of 10 , and all values less than 0.5 with a height of 5 , using if - then - else commands .an example of this results of this procedure can be seen in figure [ torus ] . 
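the grayscale trick just described is easy to reproduce outside of mathematica as well. the following python sketch (pillow and numpy, with the 5 mm / 10 mm heights and the 0.5 cut taken from the example above) turns an image into a discretely defined height function that can then be rendered and exported.

....
import numpy as np
from PIL import Image

def height_from_grayscale(path, low=5.0, high=10.0, cut=0.5):
    """Map each pixel's 0-1 grayscale value to a height in millimetres:
    values above `cut` become `high`, the rest become `low`."""
    gray = np.asarray(Image.open(path).convert("L"), dtype=float) / 255.0
    return np.where(gray > cut, high, low)
....

the returned matrix plays the role of the discretely defined height function that listplot3d renders.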
without thinking things through, i thought i would use this approach on my son s high school portrait .i will spare you the picture of the rendering of this function let s just say that the nose is not necessarily the lightest part of a portrait , the ears are not necessarily the darkest , and hair color plays extra havoc on an attempted design .this led to the question : how does the conversion from color to grayscale work ? and that led to the idea of _photographers , painters , and other artists understand the intensity of a color is measured by its luminance .the luminance of various colors can be seen in the color wheel in figure [ color ] , and when these colors are projected into grayscale space , luminance becomes grayscale .there is an excellent video by neuroscientist margaret livingstone on this topic .now , it would be hard to find a portrait with a lime green nose and dark blue ears .but in the video , livingstone mentioned a group of early 20th century artists that created the type of portraits that i could work with ._ les fauves _ , or the `` wild beasts '' was the name given to a set of young painters , primarily in paris , in the early twentieth century .they were known for their wild and unexpected use of color . as described by ferrier ,`` the fauves explored the spectrum ; for them , colors were not only mere stimuli on the retina but could also express feelings . '' the most famous fauvist was henri matisse , and other well - known members of the group were henri manquin , albert marquet , georges braque , and andr derain .artists have long recognized that color and luminance play different roles in visual perception .one of the goals of fauvism ( 1905 - 1907 ) was , according to douma , `` to give color greater emotional and expressive power . '' douma also observes that `` neuroscientifically , matisse s paintings work like a black and white photograph . '' and that is exactly what i needed to 3d print a portrait .the portrait that i focused on for this project is _ genie mask _ , seen in figure [ genie ] , by andr derain . clearly , as with most fauvist paintings , we do not have realistic colors being used for this human face .but , the luminance of the various colors in this painting line up with pretty well with the distance from the virtual `` lens '' that is viewing the portrait .this is apparent from the grayscale version of _ genie mask _ , also in figure [ genie ] .so , i was back in business , and i expected that it would not take long to find a jpg of this portrait that i could use , modify my _ mathematica _ file to import the image , create and render it as a 3d image , export the result as an stl file , and 3d print the result .for the first two steps , i was correct , but the rest of the steps required extensive computations , and actually my computer ran out of memory .i needed a mathematical tool well - known to me to finish the project .i ve been working with wavelets since the late 1990 s . inspired by the adoption of wavelets by the u.s .federal bureau of investigation ( the fbi ) for compression of fingerprint images , i learned the basics using publications written by strang , mulcahy , and eventually daubechies .after teaching introductory wavelets materials to undergraduates , steve schlicker and i co - wrote our own introduction to wavelets for undergraduates , and i began leading undergraduate research projects that applied wavelets in a variety of ways ( see https://sites.google.com/site/aboufadelreu/ ) .so what are these wavelets ? 
at their simplest ,a wavelet family is a set of linear independent functions that can be used for analysis , in the sense of taking other functions and writing them as a linear combination of wavelets functions . or , more accurately , _ approximating _ functions with linear combinations , which is basically a projection onto a wavelet subspace .for instance , the haar wavelets are piecewise constant functions , and projecting continuous functions onto the haar subspace leads to examples like what is seen in figure [ haar ] .with the type of discrete wavelet analysis being described here , the projection is created by computing a linear combination of function values .for instance , with the daubechies-4 wavelets , we have the following `` low - pass '' filter coefficients : and low - pass filtering is calculated in this way : there are also `` high pass '' filters which are computed by `` differencing '' , as indicated in this formula from the daubechies-4 wavelets : the high - pass filters are good for , among other applications , finding edges in images , and spikes in one - dimensional signals .for instance , a high - pass filter was used in the `` boston pothole project '' that i completed with three students in 2011 .the filters above are for one - dimensional signals , but there are versions for higher dimensions , and for my attempt to 3d print fauvist paintings , i needed something that would work on a two - dimensional image . for simplicity, i decided to use the two - dimensional haar low - pass wavelets filters , which basically smooth an image by averaging pixel values on 2 by 2 blocks .so , we think of the image as a matrix of grayscale luminances ( either in the 0 - 1 range , or the 0 - 255 range , it does nt really matter ) . after dividing the matrix horizontally and vertically into 2 by 2 blocks ,we replace every block with one number which is the average of the 4 values in the block .this creates a new matrix which has half the width and height , and creates a `` smoother '' version of the matrix . applying this methodology several times ( which is a part of what is called the `` pyramid scheme '' in the literature ) leads to smoother and smoother versions of the 3d rendering of demain s genie mask( see figure [ genie2 ] ) .this , in turn , led to the 3d print of _ genie mask _ , seen in figure [ genie3d ] . in 2014 , carol mcinnis , an artist that called herself `` fauvist '' , posted the `` blue '' portrait on the left in figure [ bluegirl ] .i applied my methodology to this image , yielding the 3d printed portrait also seen in figure [ bluegirl ] .( neither carol nor this image can be found on the internet in june 2016 .also note that in the image that i used , i adjust the scaling of the axis . )this is not a very sophisticated application of wavelets , but it accomplished what i wanted , which is usually all that you need in a mathematical modeling project . an interesting direction to pursue for a future project is to apply the daubechies-4 wavelets , either the one dimensional in two directions or the two - dimensional version , to see how that would smooth the original image .one advantage of using these wavelets is that they are applied to 4 by 4 boxes rather than 2 by 2 , so that fewer rounds of smoothing would be needed . some other directions to pursue :is there a good `` fauvist filter '' to apply to school portraits ? if not , is it possible to create one ? 
and , of course , would this approach work with other fauvist paintings , particularly those which are not portraits ?

segerman , h. : `` 3d printing for mathematical visualization '' , _ the mathematical intelligencer _ , volume 34 , issue 4 , 2012 , pages 56 .
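as a closing technical note, the two-dimensional haar low-pass step used for the genie mask and blue-girl pyramids amounts to averaging 2 by 2 pixel blocks; a minimal python sketch follows, in which the handling of odd image sizes is my own choice.

....
import numpy as np

def haar_lowpass_2d(img):
    """One level of the 2-D Haar low-pass filter: replace every 2x2 block
    by its average, halving width and height. Trailing odd rows/columns
    are dropped."""
    img = np.asarray(img, dtype=float)
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    img = img[:h, :w]
    return 0.25 * (img[0::2, 0::2] + img[0::2, 1::2] +
                   img[1::2, 0::2] + img[1::2, 1::2])

def smoothing_pyramid(img, levels=3):
    """Repeated application of the low-pass step (the 'pyramid scheme')."""
    out = [np.asarray(img, dtype=float)]
    for _ in range(levels):
        out.append(haar_lowpass_2d(out[-1]))
    return out
....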
|
in this short , chatty paper , i describe how my attempt to use mathematics to create a 3d print of a school portrait led me to a group of early 20th century french artists known as the fauves .
|
trust is an important abstraction used in diverse scenarios including e - commerce , distributed and peer - to - peer systems and computational clouds .since these systems are open and large , participants ( agents ) of the system are often required to interact with others with whom there are few or no shared past interactions . to assess the risk of such interactions and to determine whether an unknown agent is worthy of engagement, these systems usually offer some trust - management mechanisms to facilitate decision support .if a trustor has sufficient direct experience with an agent , the agent s future performance can be reliably predicted .however , in large - scale environments , the amount of available direct experience is often insufficient or even non - existent . in such circumstances ,prediction is often based on trustor s `` indirect experience '' opinions obtained from other agents that determine the target agent s reputation .simple aggregations , like a seller s ranking on ebay , rely on access to global information , like the history of the agent s behavior . alternatively , _ transitive trust models _ ( or web of trust models ) build chains of trust relationships between the trustor and the target agent .the basic idea being that if trusts and trusts , then can derive its trust in using s referral on and s trust in . in a distributed system with many agents and many interactions , constructing such chains requires substantial computational and communication effort .additionally , if the referral trust is inaccurate , the transitive trust may be erroneous .furthermore , trust may not be transitive between different contexts : may trust in a certain context ( e.g. , serving good food ) , but not necessarily to recommend other agents ( a `` meta''-context , or the referral context ) .all the above mentioned approaches aggregate _ the same kind of information _ agents impressions about past transactions , i.e. behavioral history .however , many systems provide a vast context for each transaction , including its type , category , or participants profile .we were curious to see how accurately one could predict trust using such contextual information .the primary objective of this work was thus not to outperform existing trust models using the same information .in contrast , we explore a complementing alternative that can enhance the existing models or to replace them , when the vast information they require is not available ( for instance , as a bootstrapping mechanism ) .the contribution of this work is a trust model that estimates target agents trust using stereotypes learned by the assessor from its own interactions with other agents having similar profile .our work is partly inspired by that study the relation between the reputation of a company and its employees : the company s reputation can be modeled as an aggregate of employees reputations and it can suggest a prior estimate of an employee s reputation . in stereotrust , agents form stereotypes by aggregating information from the context of the transaction , or the profiles of their interaction partners .example stereotypes are `` programmers working for google are more skilled than the average '' or `` people living in good neighborhoods are richer than the others '' . 
to build stereotypes , an agent has to group other agents ( `` programmers working for google '' or `` people living in good neighborhoods '' ) .these groups do not have to be mutually exclusive .then , when facing a new agent , in order to estimate the new agent s trust , the trustor uses groups to which the new agent belongs to . in casean agent does not have enough past experience to form stereotypes , we propose to construct a stereotypes sharing overlay network ( sson ) , which allows the newcomers to get stereotypes from established agents . in section [ sec : share ] we discuss how a newcomer can then use the stereotypes without completely trusting the agents who produced them .stereotrust assumes that the trustor is able to determine the profile of the target agent , and that the extracted information is accurate enough to distinguish between honest and dishonest agents .the profile of an agent is a construct that represents all the information a trustor can gather .an underlying assumption is that interaction with an agent about which no information is available is rare .for instance , when interacting with people , the profile may be constructed from the agent s facebook profile .similarly , a profile of a company may be constructed from its record in companies registry or yellow pages . generally , an agent s profile is hard to forge as it is maintained by a third party ( e.g. , historical transaction information of an ebay seller ) . in casethe profile is manipulated ( e.g. , in a decentralized environment ) , the trustor may resort to distributed security protection mechanisms or even to human operators to extract the correct information .note that stereotrust will likely determine as useless the parts of the profile which can be easily forged and do not have sufficient discriminating power .while having its own weaknesses and limitations , we think that stereotrust is interesting both academically and in practice .academically , its novelty lies in emulating a human behavior of modeling trust by stereotypes for the first guess about a stranger . in practice , as , first , stereotrust may be applicable in scenarios in which information used by traditional trust models is not available , noisy , inaccurate or tampered .second , stereotrust provides personalized recommendations based on a trustor s own experience ( in contrast to the `` average '' experience used in reputation - based approaches ) .third , if the global information used by a standard trust models is available , it can be seamlessly integrated to our model in order to enhance the prediction ( thus , the term `` stereo '' in the model s name is a double entendre ) .the stereotypes are improved by dividing the original group into `` honest '' and `` dishonest '' subgroups based on available data about past behavior of agents .our experiments show that such a dichotomy based refinement , called d - stereotrust , significantly improves the accuracy .since stereotrust defines groups in a generic way , it may be used in very different kinds of applications ; even within the same application , different agents may have their own personal , locally defined groups . also , the notion of trust itself can be easily adapted to different contexts . in this paper, we use the widely - adopted definition of trust as the trustor s subjective probability that a target agent will perform a particular action . 
as an example application ,consider judging the quality of product reviews from a web site such as epinions.com .in such a community , users write reviews for products , structured into different categories ( e.g. , books , cars , music ) .these reviews are later ranked by other users .normally , each reviewer has some categories in which she is an `` expert '' ( like jazz albums for a jazz fan ) .the reviewer is more likely to provide high quality reviews for products in these familiar categories . of course, users may also write reviews for products from other categories , but their quality might be inferior , because of , e.g. , insufficient background knowledge .`` mastery '' can be correlated between categories .for instance , audiophiles ( people interested in high - quality music reproduction ) usually know how to appreciate music ; thus , if they decide to review a jazz album , the review is more likely to be in - depth .the correlation might also be negative , as one may not expect an insightful review of a jazz album from , e.g. , a game boy reviewer .when facing an unrated review of a jazz album by an unknown contributor , we can use the information on the contributor s past categories ( game - boy fan or an audiophile ? ) and our stereotypes ( `` noisy '' gamers vs. insightful audiophiles ) to estimate the quality of the review .we use epinions.com data to evaluate stereotrust ( section [ sec : evaluation ] ) : we demonstrate that taking into account reviewers interests provides a good estimation of the quality of the review . consider a very different kind of application a peer - to - peer storage system .if a peer wants to store a new block of data , it needs to choose a suitable replication partner .the suitability of a partner peer depends on the likelihood that the peer will be available when the data needs to be retrieved ( which may depend on its geographic location , time - zone difference , etc . ) ; the response time to access the data ( which may depend on agreements and infrastructure between internet service providers ) ; and many other factors .existing systems usually use a multi - criteria optimization model , and thus need substantial knowledge about the specific peer in question for instance , its online availability pattern , end - to - end latency and bandwidth .applying stereotrust can provide an alternative systems design , where a peer in , say tokyo , can think `` my past experiences tell me that peers in beijing and hong kong have more common online time with me compared with peers in london and new york .likewise , peers in new york and hong kong with a specific ip prefix provide reliable and fast connections , while the others do nt . ''based on such information , if the peer has to choose between a partner in hong kong or in london , the peer can make the first guess that a peer in hong kong is likely to be its best bet , without needing to know the history of the specific peers in question . 
a `` mixed '' data placement strategy that uses available historical information and stereotypes is expected to result in even better performance .it is worth noting that we are not claiming that it necessarily provides the best possible system design , but merely that it opens the opportunity for alternative designs .we have in fact devised such stereotrust guided data placement strategy for p2p backup systems , resulting in several desirable properties compared to other placement strategies .the core of our contribution is the basic stereotrust model ( section [ basicmodel ] ) and d - stereotrust , the dichotomy based enhanced model using historical information ( section [ sec : enhencedmodel ] ) .we also discuss how to select features to form stereotypes ( section [ preliminary ] ) .feature preprocessing can help to form discriminating stereotypes , and thus to improve the accuracy of the trust assessment .when the trustor does not have sufficient past transactions to form stereotypes , we propose to construct an overlay network to share stereotypes among agents to help inexperienced agents bootstrap the system ( section [ sec : share ] ) .an experimental evaluation using a real - world ( epinions.com ) and a synthetic dataset ( section [ sec : evaluation ] ) shows that stereotypes provide an adequate `` first guess '' and when coupled with some historical data ( d - stereotrust ) , the resulting trust estimates are more accurate than the estimates provided by the standard trust models .intuitively , past mutual interactions are the most accurate source to predict agent s future behavior , but such an approach is unsuitable in large - scale distributed systems where an agent commonly has to assess a target agent with whom it has no past interactions . instead of using only local experience ,many approaches derive the trust from target agent s reputation information aggregated from other agents .for instance , and derive trust from paths of trust relationships starting at the trustor ( the asking agent ) , passing through other agents and finishing at the target agent ( the transitive trust model ) .however , the transitive trust model has several drawbacks , such as handling wrong recommendations , efficiently updating trust in a dynamic system , or efficiently establishing a trust path in a large - scale system .eigentrust is a reputation system developed for p2p networks .its design goals are self - policing , anonymity , no profit for newcomers , minimal overhead and robustness to malicious coalitions of peers .eigentrust also assumes transitivity of trust .the peers perform a distributed calculation to determine the eigenvector of the trust matrix .the main drawback of eigentrust is that it relies on some pre - trusted peers , which are supposed to be trusted by all peers .this assumption is not always true in the real world .first , these pre - trusted peers become points of vulnerability for attacks .second , even if these pre - trusted peers can defend the attacks , there are no mechanisms to guarantee that they will be always trustworthy .additionally , eigentrust ( and some other reputation systems like ) is designed based on distributed hash tables ( dhts ) , thus imposing system design complexity and deployment overheads .in contrast , our proposed model does not rely on any specific network structure for trust management . 
proposed a trust system using groups .a group is formed based on a particular interest criterion ; group s members must follow a set of rules .the approach assumes that the leader of each group creates the group and controls the membership .the trust is calculated by an aggregative version of eigentrust , called eigen group trust . in eigen group trust ,all the transactions rely on the group leaders , who are assumed to be trusted and always available .this approach requires certain special entities ( i.e. , group leaders ) to coordinate the system , thus may suffer from scalability issue , while stereotrust has no such restrictions .regret combines direct experience with social dimension of agents , that also includes so - called system reputation .system reputation is based on previous experience with other agents from the same _ institution_. unlike stereotrust s stereotypes , regret s institutions exist outside the system , thus each agent can be assigned to an institution with certainty .regret also assumes that each agent belongs to a single institution .blade derives trustworthiness of an unknown agent based on the feedback collected from other agents .this model treats agent s properties ( i.e. , certain aspects of the trustworthiness ) and other agents feedback as random variables . by establishing a correlation between the feedback and the target agent s properties using bayesian learning approach ,the trustor is able to infer the feedback s subjective evaluation function to derive certain property of the unknown target agent .the related works reviewed so far mainly rely on direct experience and/or indirect experience with the target agent to derive trust .in contrast , stereotrust uses another kind of information stereotypes learned from the trustor s own interactions with _ other _ agents having similar profiles .so stereotrust provides a complementing alternative that can enhance the existing models , particularly when the vast information they require is not available .while using stereotypes for user modeling and decision making was suggested previously , to the best of our knowledge the conference version of this paper is the first concrete , formal computational trust model that uses stereotypes .several other papers on similar lines have been published since then , demonstrating the potential of using stereotypes for assessing trust . used stereotypes to address the cold - start issue .the trustor constructs stereotypes relying on m5 model tree learning algorithm .that is , stereotype construction is treated as a classification problem ( i.e. , learning association between features and the expected probability of a good outcome ) , so no explicit groups are built .in contrast , our approach forms and maintains explicit groups and infers the corresponding stereotypes based on aggregated behaviors of the group members .this makes the derived stereotypes easy to be interpreted by the real users .similarly to our stereotype - sharing overlay network , in their work , new trustors can request stereotypes from the experienced ones , and then combine these stereotypes with the target agent s reputation ( if any ) using subjective logic .although similar , our approach has several advantages : ( 1 ) as a basic trust model , stereotrust uses the ( well adapted ) beta distribution to derive stereotypes and trust , thus the resulting decision is easy to be interpreted ; ( 2 ) the work shares agents local stereotypes heuristically ( e.g. 
, no discussion on how stereotype providers are selected ) while stereotrust offers a more sophisticated sharing mechanism by maintaining a dedicated overlay network for exchanging , combining , and updating stereotypes ; ( 3 ) stereotrust is applicable in practice as demonstrated in real - world dataset ( epinions.com ) based evaluation ( see section [ sec : evaluation ] ) presented in this paper , as well as in other diverse applications and settings in which we have applied stereotype to derive trust . considers the problem of identifying useful features to construct stereotypes .three feature sources are discussed : ( i ) from the social network , i.e. , relationships between agents are features ; ( ii ) from agents competence over time .the target agent s accumulated experience in certain tasks can be viewed as features .an example stereotype may be _ if an agent performed task t more than 100 times , he is considered experienced ( trustworthy)_. ( iii ) from interactions , e.g. , features of both interaction parties . for instance , the trustor with certain features is positively or negatively biased towards the target agent with other features .this work provided a comprehensive summary of feature sources ( for stereotype formation ) from social relationships among agents , but the authors did not apply these features to any concrete application scenarios for validation .such works identifying suitable features for building stereotypes complement stereotrust s abstract computational trust model leveraging on said stereotypes . the stereotypes stereotrust uses can be also regarded as a generalization of various hand - crafted indirect trust or reputation metrics alternatively , these metrics can be interpreted as complex stereotypes ( in contrast to our stereotypes , some of these metrics also involve other agents opinions ) . for instance , two of the metrics proposed by : the transaction price and the savviness of participants ( measured as the focus of a participant on a particular category ) are analogous to our stereotypes .stereotrust also has parallels to the ranking mechanisms used in web search engines .the usage of group information is analogous to using the content of the web pages to rank them .transitive trust models resemble `` pure '' pagerank , that uses only links between pages . similarly to web search , where using both content and links together gives better results , we derive an enhanced method ( d - stereotrust ) , that uses both groups and ( limited ) trust transitivity .we refer to a participant in the system as an agent .we denote by the set of all agents in the system ; and by the set of agents known to agent .an agent can provide services for other agents .a transaction in the system happens when an agent accepts another agent s service . to indicate the quality of a service ,an agent can rank the transaction . for simplicity ,we assume that the result is binary , i.e. , successful or unsuccessful . denotes the set of transactions between service provision agent and service consumption agent and denotes the number of such transactions .we assume that each agent is characterized by a feature vector ] , that , for each group , map agent to the probability that is the member of this group .thus , in the most general model , a group is a fuzzy set of agents . 
if , it is certain that is member of ( ) ; if , it is certain that does not belong to ( ) .such a group definition makes the agent grouping flexible and personalizable , in contrast to the rigid notion of group membership in regret .the premise of our work is that the trustworthiness of a group ( i.e. , stereotypes ) reflects the trustworthiness of the group members .members of a group should behave consistently ideally , a group should be discriminating , i.e. contain only either honest ( the ones act honestly in transactions ) , or dishonest agents .thus , the key question is : what features of agents can be used to form such discriminating groups ? among many possible approaches to find discriminating features ( such as the gini index , or the chi - square test ) we use the information gain , which is measured by _entropy _ as the criterion to select the features .the resulting decision model is easy to interpret by the users of the system .however , other , more complex approaches to derive decision criteria can be also used , such as decision trees or learning discriminant analysis ( used in our recent work ) .we assume that the categorization is binary ( i.e. , an agent is honest or dishonest ) .we denote the proportion of honest and dishonest agents by and respectively ( ) .then the entropy of the set of agents that trustor has interacted with is calculated as : entropy is used to characterize the ( im)purity of a collection of examples . from eq . [eq : entropy ] we can see that the entropy will have the minimum value of 0 when all agents belong to one class ( i.e. , either all honest or all dishonest ) and the maximum value of when agents are evenly distributed across the two classes .using entropy , we calculate information gain of every feature to determine the best ones . for each feature , we partition the set of agents into subsets based on the values of the feature the agents have ( ) .the subsets are disjoint and they cover .the information gain of feature is then calculated by : the information gain of a feature measures expected reduction in entropy by considering this feature .clearly , the higher the information gain , the lower the corresponding entropy .we can then select features based on their information gains .several methods can be used .for instance , we can first rank the features by information gain ( descending order ) and choose the first features , depending on the specific applications ; or we can set a threshold and select the features whose information gain is higher than .we demonstrate in evaluation ( see section [ sec : evaluation ] ) how such feature selection scheme is applied in epinions dataset to improve trust prediction accuracy . 
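the entropy and information gain computations above are short enough to state in code. in the sketch below (python; the dictionary representation of agent profiles and the binary honest / dishonest labels are illustrative assumptions) the trustor ranks candidate features by the expected reduction in entropy.

....
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a set of binary honest(1) / dishonest(0) labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(agents, labels, feature):
    """Expected entropy reduction when the agents the trustor has interacted
    with are partitioned by the values of one profile feature."""
    gain = entropy(labels)
    for value in {a[feature] for a in agents}:
        subset = [l for a, l in zip(agents, labels) if a[feature] == value]
        gain -= len(subset) / len(labels) * entropy(subset)
    return gain

# tiny illustration with two hypothetical features: here "country" separates
# honest from dishonest agents perfectly, while "gender" carries no information
agents = [{"country": "A", "gender": "f"}, {"country": "A", "gender": "m"},
          {"country": "B", "gender": "f"}, {"country": "B", "gender": "m"}]
labels = [1, 1, 0, 0]
print(information_gain(agents, labels, "country"),
      information_gain(agents, labels, "gender"))
....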
in some cases , a trustor may also want to develop new stereotypes that are associated with combined features .for instance , consider that the agents profile has many fields , including country , gender , income , etc .a trustor already has two stereotypes , say , on people from a country ( with the feature vector ] ) .when the trustor wants to develop a new stereotype on women from country , it needs to combine the feature vectors .a combined feature vector contains all the values of qualitative features from vectors to be combined .thus , in our example , the combined vector is ] and ] , where 0 indicates that the agent is absolutely untrustworthy and 1 indicates that the agent is absolutely trustworthy .the beta distribution is commonly used to model uncertainty about probability of a random event , including agent s reputation .we model a series of transactions between a pair of agents as observations of independent bernoulli trials . in each trial , the success probability is modeled by the beta distribution with parameters and ( we start with , that translate into complete uncertainty about the distribution of the parameter , modeled by the uniform distribution : ) . after observing successes in trials ,the posterior density of is .we choose beta distribution because the resulting decision is easy to be interpreted by the system users , and is an already popular choice for modeling and interpreting trust .the trust function between _ entities _ ( an individual agent or a group ) is defined based on the beta distribution : entity evaluates entity . from the viewpoint of , and represent , respectively , the number of successful transactions and unsuccessful transactions between and ( and ) .trust function mapping trust rating ( ) to its probability is defined by : the expected value of the trust function is equal to : basic stereotrust trust model only uses the trustor s local stereotypes to derive another agent s trustworthiness and hence it works without the target agent s behavioral history that is typically required by conventional trust models . consider a scenario where a service requestor ( a trustor agent ) encounters a potential service provider with whom had no prior experience .we assume that can obtain some features about , such as s interests , location , age etc . combines its previous experience with such information to form groups that help to derive s trustworthiness .stereotrust starts by forming stereotypes , based on the trustor s historic information . in the first step ,appropriate features are selected and/or combined ( section [ sec : featureselection ] ) . using the processed features , stereotrust groups agents accordingly .note that stereotrust considers only groups for which the membership is certain from s perspective ( ) .we denote these groups by such that ( for the sake of simplifying the notation , we will use in place of when the context is clear ) .the trust between and each of these groups is derived based on past interactions with agents that belong to with certainty .thus , from the set of all agents has previously interacted with ( ) , extracts those that belong to ( i.e. , ) .then , counts the total number of successful transactions with as a sum of the numbers of successful transactions with s members : .the total number of unsuccessful transactions is computed similarly . 
in this way, can comprehensively understand how trustworthy is this group by aggregating its members trustworthiness .finally , uses eq .( [ trustfunction ] ) to derive trust function . to derive agent s trust value, combines its trust towards all the groups in which is a member . by combining multiple group trusts, is able to derive its trust in from multiple aspects .the trust is computed as a weighted sum of groups trust with weights proportional to the fraction of transactions with a group . for group ,weight factor is calculated as a number of s transactions with members ( such that ) ; divided by the total number of s transactions with members of any group .obviously , the higher the number of transactions regarding one group , the more likely is to interact with agents of this group , so this group contributes more to represent s trust from viewpoint of .we define weight factor for as : using the estimated weights , we combine all group trusts to derive s trust . the process of trust calculation is illustrated in figure [ fig : simplemodel ] .s trust by assigning corresponding weight factor ] we propose two approaches to calculate and combine group trusts . in this approach , we first calculate probability density of trust rating foreach group using trust function ( eq . ( [ trustfunction ] ) ) and then combine them to produce s probability density of trust rating using eq .( [ l1w ] ) : where and are aggregated numbers of successful and unsuccessful transactions between and members of group ; is the weight ( fraction of transactions with group ) and is the resulting trust value . in this approach ,we use only one trust function by setting the parameters , i.e. , numbers of corresponding successful and unsuccessful transactions . further divided into two subgroups containing exclusively `` honest '' and exclusively `` dishonest '' agents .when facing a stranger belonging to , a trustor will estimate the similarity between the stranger and each subgroup . ] and dishonest sub group of each group are firstly combined using closeness and then trusts of all groups are combined using weight factor to derive target agent s trust . ]stereotrust model simply groups agents based on agents and transactions profiles . in some scenarios, it can be difficult for stereotrust to accurately predict the performance of an agent who behaves differently from other members of its group(s ) .for instance , consider a case when trustor has interacted with mostly honest agents , while the target agent is malicious .stereotrust will derive high trust for the malicious target agent . to improve prediction accuracy, we propose dichotomy - based enhancement of stereotrust ( called d - stereotrust ) .the key idea is to construct subgroups that divide agents on a finer level than the groups.in d - stereotrust , each top - level group is further divided into two subgroups , an _ honest _ and a _ dishonest _ subgroup ( hence dichotomy - based ) . assigns an agent to either subgroup by analyzing history of its transactions with .the basic criterion we use is that if has more successful than unsuccessful transactions with , is added to the honest subgroup ( and , consequently , in the alternative case is added to ) .several alternative criteria are possible , for instance , the average rating of transactions with .note that , although not completely accurate , such a method indeed helps to separate honest agents from the dishonest ones more accurately than the basic stereotrust . 
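to make the basic combination concrete before the dichotomy refinement is developed further, the sketch below computes a per-group trust expectation from the beta model with the uniform prior and mixes the groups with weights proportional to the trustor's transaction counts. it is a simplified stand-in for the sof / sop combinations described above, not a literal transcription of them.

....
def beta_trust_expectation(successes, failures):
    """Expected trust after s successful and f unsuccessful transactions,
    assuming the uniform Beta(1, 1) prior: E[T] = (s + 1) / (s + f + 2)."""
    return (successes + 1) / (successes + failures + 2)

def stereotrust_estimate(groups):
    """`groups` lists, for every group the target agent belongs to, the
    (successes, failures) the trustor has accumulated with that group's
    members. Weights are the trustor's transaction fractions per group."""
    total = sum(s + f for s, f in groups)
    return sum(((s + f) / total) * beta_trust_expectation(s, f)
               for s, f in groups)

# e.g. the target belongs to one group the trustor knows well and one it
# has hardly interacted with
print(stereotrust_estimate([(40, 10), (2, 3)]))
....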
after dividing a group into subgroups ( , ) and determining s trust towards the subgroups ( computed as in the previous section ), d - stereotrust computes how similar is the target agent to the honest and the dishonest subgroup . if `` seems '' more honest , s trust towards aggregated should reflect more s trust towards the honest subgroup ; similarly , if `` seems '' more dishonest , the dishonest subgroup should have more impact on s aggregated trust towards .this process is illustrated on figure [ fig : improvedmodel ] .the closeness , which can be measured by membership of a target agent to subgroup ( where represents or ) is based on other agents opinions about . note that we can not assign to a group ( similarly to any other agent ) , because the grouping described above is based on history with , and , obviously , there are no previous transactions between and .thus , both and are fuzzy ( in ] , where 1 represents a completely trustworthy provider and 0 represents a completely untrustworthy provider .initially , _ trusted stereotype provider _ list is filled with s `` familiar '' agents ( e.g. , friends or colleagues in the real world , etc . ) . when no `` familiar '' agents exist in the system , chooses stereotype providers randomly .after each request , the trustor updates the trust score of the corresponding stereotype provider according to the accuracy of the reported stereotypes ; the accuracy is assessed from the trustor s perspective ( see sec .[ sec : ssonupdate ] ) . to collect correct stereotypes , request the top - k stereotype providers with highest trust scores in the _ trusted stereotype provider _ list , thus limiting communication overhead . notice that sson is constructed to promote correct stereotypes sharing to help inexperienced agents estimate trustworthiness of an unknown service provider , but it is not used to discover trustworthy transaction partner because agents who provide high quality service may not necessarily report correct information about other agents ( and vice versa ) . in the scenario that the agents who provide correct stereotypes also act honestly in a transaction , sson can be used to help promote successful transactions ( i.e. , select reliable agents who have high trust scores as the service providers ) .however , the goal of this work is to design mechanisms to estimate trustworthiness of an unknown service provider ( i.e. , no historical information is available ) , so discussion on relying sson to derive agent s trust like traditional trust mechanisms ( e.g. , feedback aggregation , web of trust , etc . )is out of the scope of this paper . 
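a minimal sketch of the dichotomy step follows: members of a group are split into honest and dishonest subgroups according to the trustor s own transaction log, the target s fuzzy membership in each subgroup is estimated from third - party feedback about the target, and the two subgroup trusts are blended with those memberships. the specific closeness estimate used here (a beta expected value over the collected positive / negative opinions) is our assumption, since the paper s exact formula is not recoverable from the extracted text; group_trust is the helper from the earlier sketch.

def split_subgroups(history, members):
    # a member with more successful than unsuccessful transactions (in the
    # trustor's own log) goes to the honest subgroup, otherwise to the dishonest one
    honest, dishonest = [], []
    for a in members:
        s, f = history.get(a, (0, 0))
        (honest if s > f else dishonest).append(a)
    return honest, dishonest

def closeness_from_feedback(positive, negative):
    # assumed fuzzy membership of the target in the honest subgroup, derived from
    # third-party opinions; membership in the dishonest subgroup is the complement
    mu_honest = (positive + 1.0) / (positive + negative + 2.0)
    return mu_honest, 1.0 - mu_honest

def dichotomy_group_trust(history, members, positive, negative):
    # blend the trustor's trust in the two subgroups by the target's closeness to each
    honest, dishonest = split_subgroups(history, members)
    mu_h, mu_d = closeness_from_feedback(positive, negative)
    t_h = group_trust(history, honest)[2]
    t_d = group_trust(history, dishonest)[2]
    return mu_h * t_h + mu_d * t_d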
after collecting other agents stereotypes ,we discuss how trustor combines these external stereotypes to derive the trust of the unknown target agent .we adopt a simple weight based strategy to combine the stereotypes in order to compute the final trust score for .the weight of each stereotype depends on the trust ( in terms of providing correct stereotype ) of the corresponding stereotype provider .we denote the trust scores of the stereotype providers by .the weight of stereotype provided by agent is calculated as .weighted stereotypes can be combined by the same methods as discussed in section [ basicmodel ] : the sof or the sop methods .the quality of stereotypes provided by different agents may vary .some agents may maliciously provide fake information ; others may have different perspective on the quality of transactions .an agent using external stereotypes must discover such behavior and update trust scores of the stereotype providers such that inaccurate stereotypes have less impact on the final decision .the problem of deriving trust in a stereotype provider is analogous to the general problem of computational trust . , after observing the outcome of its transaction with , updates stereotype providers trust .this time , however , we define a _ recommendation transaction _ between a trustor and a stereotype provider .the recommendation transaction is successful if the observed outcome of the original transaction ( between a trustor and agent ) is the same as the outcome predicted by s stereotype .the trust in stereotype provider can be then derived based on the number of successful and unsuccessful _ recommendation _ transactions . is thus updated after each transaction .any computational trust model can be used to compute from and . in order to instantiate the model , similarly to the basic stereotrust model ,we use the beta distribution ( section [ sec : trustfunction ] ) . is the expected value of the distribution , computed as .so far , we have presented the basic stereotrust model , its dichotomy based enhancement , as well as a overlay network supporting efficient stereotype information sharing .the rationale for the stereotrust approach is to determine an alternative and complementary mechanism to compute trust even in the absence of ( global ) information that is likely to be unavailable under some circumstances , and instead using some other class of information ( stereotypes ) which can be established by local interactions .since stereotrust utilizes information theory / machine learning to conduct feature selection and feature combination to form stereotypes , it is worth mentioning that the trustor periodically re - trains the model to more accurately derive trustworthiness of the target agent .several methods may be applied to update the model , for instance , the trustor may refine stereotypes after every new transactions ; or the trustor only updates the model when the latest trust prediction is unsuccessful .we will introduce and evaluate different update strategies in the experimental evaluation section ( section [ sec : evaluation ] ) . 
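the weight - based combination of externally reported stereotypes and the beta - based update of a stereotype provider s trust score after a recommendation transaction can be sketched as below. the 0.5 decision threshold used to turn a reported trust value into a predicted outcome is an illustrative assumption, not a value from the paper.

def combine_external_stereotypes(reported_trust, provider_trust):
    # reported_trust maps provider id -> trust value it predicts for the target;
    # each report is weighted by the provider's own trust score
    total = sum(provider_trust[p] for p in reported_trust) or 1.0
    return sum(provider_trust[p] / total * t for p, t in reported_trust.items())

def update_provider_trust(record, provider, predicted, outcome_success, threshold=0.5):
    # the recommendation transaction succeeds when the provider's stereotype
    # predicted the observed outcome of the original transaction
    hit = (predicted >= threshold) == outcome_success
    s, f = record.get(provider, (0, 0))
    record[provider] = (s + int(hit), f + int(not hit))
    s, f = record[provider]
    return (s + 1.0) / (s + f + 2.0)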
similarly to other models like eigentrust , transitive trust , blade , regret and travos , stereotrust is explicitly not designed to cope with agent s dynamic behavior .however , from the perspective of behavior science , as well as being supported by recent works that use contextual information to predict user behavior in various information systems , we believe that an agent s behavior change in the transactions is correlated with and can be inferred ( to certain extent ) by the associated contextual information ( e.g. , by considering the dynamic trust ) . for instance , in an online auction site like ebay , a seller may vary his behavior consciously or unwittingly in selling different items ( e.g. , he may be careful when selling expensive goods , but imprudent with cheap ones ) . by selecting appropriate features from the contextual information ( e.g. , the item s price ) ,stereotrust is able to construct stereotypes that partially model the target agents implicit dynamic behaviors . specifically ,when an agent changes its behavior , some of the associated features may also vary accordingly .such dynamism is observed by the trustor who will then adjust its local stereotypes to adapt to the target agent s dynamic behavior ( we will show performance of such a learning based adaptive update strategy in the next section ) .since this work focuses on `` cold - start '' problem of trust assessment , and handing dynamism is a problem in its own right , we leave as a future work a more detailed discussion on addressing behavior changes .in this section , we conduct experiments to evaluate the performance of proposed stereotrust models .we first discuss methodology in [ sec : method ] .[ sec : realdata ] presents the results on the epinions.com dataset ; [ sec : syndata ] presents the results on the synthetic dataset .when comparing stereotrust with other algorithms and approaches , we consider two factors : the _ accuracy of prediction _ that compares the result of the algorithm with some ground truth ; and the _ coverage _ , the fraction of the population that can be evaluated by the trustor , given trustor s limited knowledge . we compare stereotrust with the following algorithms ._ feedback aggregation _: : in this model , if the trustor does not know the target agent , it asks other agents and aggregate feedbacks to derive target agent s trust .note that as the trustor may not have experience with the asked agent , it can not identify the dishonest reporters , thus it may face false feedbacks ._ eigentrust _ : : this model uses transitivity of trust and aggregates trust from peers by having them perform a distributed calculation to derive eigenvector of the trust matrix . the trustorfirst queries its trusted agents ( `` friends '' ) about target agent s trustworthiness .each opinion of a friend is weighted with the friend s global reputation . to get a broad view of the target agent s performance ,the trustor will continue asking its friends friends , and so on , until the difference of the two trust values derived in the two subsequent iterations is smaller than a predefined threshold .pre - trusted agents ( with high global reputation ) are used in this model . _ transitive trust ( web of trust ) _ : : this model is based on transitive trust chains. 
if the trustor does nt know the target agent , it asks its trusted neighbors ; the query is propagated until the target agent is eventually reached .the queries form a trust graph ; two versions of the graph are commonly considered : + _ shortest path _ ; ; the agent chooses the shortest path ( in terms of number of hops ) and ignores the trustworthiness of agents along the path . if multiple shortest trust paths exist , the trustor will choose the most reliable one ( the agents along the path are the most reliable ) . _ most reliable path _ ; ; the agent chooses the most reliable neighbor who has the highest trust rating to request for target agent s trust .if this neighbor does not know target agent , it continues requesting its own most reliable neighbor . to avoid long paths , the number of hops is limited to 6 in our experiment ._ blade _ : : .this approach models the target agent s properties ( i.e. , certain aspects of trustworthiness ) and the feedback about the target agent collected from other agents as random variables . by establishing a correlation between the feedback and the target agent s properties using bayesian learning approach( i.e. , a conditional probability where denotes the feedback and denotes a property ) , the trustor is able to infer the feedback provider s subjective feedback function to derive certain property of the unknown agent ( e.g. , does a seller ship correct goods on time in an online auction ? ) .this avoids explicitly filtering out unreliable feedback . in other words ,the trustor can safely use feedback from both honest and dishonest providers as long as they act consistently . _group feedback aggregation _ : : d - stereotrust uses opinions reported by the agents who are the members of honest subgroups as the metrics to measure closeness between the target agent and the subgroups .we compare the accuracy of trust value derived using other agents opinions ( called group feedback aggregation ) with that derived using d - stereotrust to validate whether such third party information is used by d - stereotrust judiciously .note that different from the pure _ feedback aggregation _ described above , _group _ feedback aggregation only uses the feedbacks provided by the agents from the honest subgroups . _ dichotomy - only _ : : d - stereotrust divides each group into an honest and a dishonest subgroups . to evaluate the impact of the initial grouping in d - stereotrust , we compare d - stereotrust with a similar , dichotomy - based algorithm , but without the higher - level grouping ( thus without the stereotypes ) . in dichotomy- only , agent classifies all the agents it has previously interacted with into just two groups : honest and dishonest ( `` honest agents '' having more successful transactions with ) . similarly to d - stereotrust , to evaluate an agent , queries the agents from the honest subgroup about their trust to ; using these feedbacks , calculates the distance between and the two groups . to summarize , the feedback aggregation , the eigentrust ( and its variants ) and the transitive trust model ( actually the basis of eigentrust ) are currently the mainstream trust mechanisms .blade , similarly to our approach , uses agent s features to determine trust .group feedback aggregation and dichotomy - only are by - products of our approach ; their results will quantify the impact of each element of ( d-)stereotrust . 
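among the baselines listed above, eigentrust is the most involved computationally; a compact power - iteration sketch is given below for reference. the damping weight given to the pre - trusted distribution, the tolerance and the iteration cap are illustrative parameter choices, not values taken from the paper or from the original eigentrust work.

import numpy as np

def eigentrust(local_trust, pretrusted, alpha=0.15, tol=1e-8, max_iter=200):
    # local_trust[i][j] >= 0 is agent i's local trust in agent j; rows are
    # normalized into the matrix C, and the global trust vector t is iterated as
    # t <- (1 - alpha) * C^T t + alpha * p, with p the pre-trusted distribution
    C = np.maximum(np.asarray(local_trust, dtype=float), 0.0)
    n = C.shape[0]
    for i in range(n):
        row_sum = C[i].sum()
        C[i] = C[i] / row_sum if row_sum > 0 else np.full(n, 1.0 / n)
    p = np.asarray(pretrusted, dtype=float)
    p = p / p.sum()
    t = p.copy()
    for _ in range(max_iter):
        t_next = (1.0 - alpha) * C.T @ t + alpha * p
        if np.abs(t_next - t).sum() < tol:
            return t_next
        t = t_next
    return t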
to estimate the accuracy of each algorithm, we compare the value of trust computed by the algorithm for a pair of agents with the ground truth .then , we aggregate these differences over different pairs using the mean absolute error ( mae ) . besides prediction accuracy, we also measure the coverage of each algorithm ; the coverage is defined as the percentage of agents in the system that can be evaluated by a trustor .we present the accuracy of the algorithms in two formats .first , to measure overall performance of an algorithm , we show the mae aggregated over the whole population of agents ( e.g. , table [ tab : epinionsmae ] ) .second , to see how the algorithm performs in function of agent s true trustworthiness ( the ground truth ) , we construct figures presenting the derived trust for a subset of agents ( e.g. , figure [ fig : epinionsm ] ) .of course , as in the real system , the trust algorithms do not have access to the ground truth . to avoid cluttering ,we randomly choose 50 target agents .the y - axis represents the trust rating of the agents ; the x - axis represents the index of the evaluated agent . for clarity ,agents are ordered by decreasing true trustworthiness .ideally , the ground truth of an agent represents the agent s objective trustworthiness .however , as we are not able to measure it directly , we have to estimate it using the available data . along with the description of each dataset, we discuss how to derive the ground .epinions.com is a web site where users can write reviews about the products and services , such as books , cars , music , etc ( later on we use a generic term `` product '' ) .a review should give the reader detailed information about a specific product .other users can rate the quality of the review by specifying whether it was _ off topic _ , _ not helpful _ , _ somewhat helpful _ , _ helpful _ , _ very helpful _ or _most helpful_. for each review , epinions.com shows an average score assigned by users .epinions.com structures products in tree - like categories .each category ( e.g. , books ) can include more specific categories ( e.g. , adventure , non - fiction , etc . ) .the deeper the level , the more specific the category to which the product belongs .the complete epinions dataset we crawled contains 5,215 users , 224,500 reviews and 5,049,988 ratings of these reviews . for our experiments , we selected 20 trustors and 150 target agents randomly ( we repeated the experiments with different agents and got similar results ) . on the result plots ( e.g. , figure [ fig : epinionsm ] ) ,error bars are added to show the 95% confidence interval of predictions by each trustor for the same target agent in epinions community , users write or rate reviews of products they are interested in .this results in an intuitive grouping criterion : groups correspond to categories of products , and an epinions user belongs to a certain group if she wrote or rated a review of a product in the corresponding category .to map epinions.com to stereotrust model , we treat each user as an agent .epinions.com categories provide a natural representation of the _ interested in _ relation .a user is _ interested in _ a ( sub)category if she wrote or rated at least one review of a product under this category .groups are formed according to agents _ interested in _ relations .consequently , each epinions.com category corresponds to a group of agents , each of whom is _ interested in _( wrote or rated a review for ) this category . 
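the two evaluation quantities used throughout this section, and the simple ground - truth estimate, can be written down directly; the sketch below encodes an agent that the trustor cannot evaluate as a missing (none) prediction, which is our own encoding of the coverage notion.

def mean_absolute_error(predicted, ground_truth):
    # mae over the target agents for which a prediction could be produced
    pairs = [(p, ground_truth[a]) for a, p in predicted.items() if p is not None]
    return sum(abs(p - g) for p, g in pairs) / len(pairs)

def coverage(predicted):
    # fraction of target agents the trustor was able to evaluate at all
    evaluated = sum(1 for p in predicted.values() if p is not None)
    return evaluated / len(predicted)

def estimated_ground_truth(review_ratings):
    # approximate an agent's trustworthiness as the average of all ratings
    # received by the reviews it wrote (one plausible reading of the text;
    # review_ratings is a list of per-review rating lists in [0, 1])
    ratings = [r for review in review_ratings for r in review]
    return sum(ratings) / len(ratings)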
note that if there exist multiple such categories ( i.e. , stereotypes ) , in order to improve trust prediction accuracy , we only select the first three ones that have the highest information gains ( see section [ sec : featureselection ] ) . a transaction between agents and occurs when rates a review written by ; the outcome of the transaction corresponds to the assigned rating . to map epinions.com ratings to stereotrust s binary transaction outcome , we assume that the transaction is successful only if the assigned score is _ very helpful _ or _most helpful_. we set the threshold so high to avoid extreme sparsity of unsuccessful transactions ( over 91% review ratings are _ very helpful _ or _ most helpful _ ) .we compute the ground truth of an agent as the average rating of the reviews written by the agent .for instance , if an agent wrote 3 reviews , the first review was ranked by two users as 0.75 and 1.0 respectively , while the second and the third received one ranking each ( 0.75 and 0.5 ) , the ground truth for that user is equal to . note that the `` ground truth '' computed with this simple method only approximates the real trustworthiness of an agent , as we do not adjust the scores to counteract , e.g. , positive or negative biases of the scoring agents . in order to avoid biased ground truth caused by individual erroneous ratings, we removed the reviews with small amount of ratings ( less than 50 ratings in our experiments ) .we select the top 3 features ( i.e. , categories in the experiments ) based on information gain ( see section [ sec : featureselection ] ) for constructing stereotypes .note that all figures in the evaluation section show results with feature selection .figure [ fig : epinionsm ] shows the performance of stereotrust model .sof / sop on the legend indicates that the stereotypes are aggregated using sof / sop approach respectively ( section [ basicmodel ] ) .the results show that both sof and sop approaches fail to provide a good fit to the ground truth .this is because in the epinions dataset , most ratings given by the agents are positive ( _ very helpful _ or _ most helpful _ ) .as most of the trustors have limited direct experience with low quality reviews ( hence dishonest agents ) , using only locally - derived stereotypes it is difficult to predict that an agent will write low quality reviews .figure [ fig : epinionsim ] show the performance of d - stereotrust model .we can see that both sof and sop derived trust ratings are more accurate than feedbacks derived trust rating ( group feedback aggregation ) , which supports that our model outperforms the one which simply aggregates other agents feedbacks .sof approach gives a better fit to the ground truth than sop approach . comparing figure [ fig : epinionsm ] and [ fig : epinionsim ], we observe that d - stereotrust results fit the ground truth better than the stereotrust . 
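for completeness, the information - gain ranking used above to keep only the three most informative categories can be sketched as follows; transactions are represented as (feature set, success flag) pairs, which is an illustrative encoding rather than the paper s.

import math

def binary_entropy(pos, neg):
    total = pos + neg
    if total == 0 or pos == 0 or neg == 0:
        return 0.0
    p = pos / total
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def information_gain(transactions, feature):
    # gain of a candidate stereotype feature with respect to the binary outcome;
    # transactions is a list of (set_of_features, success_bool) pairs
    n = len(transactions)
    pos = sum(1 for _, ok in transactions if ok)
    base = binary_entropy(pos, n - pos)
    cond = 0.0
    for present in (True, False):
        subset = [ok for feats, ok in transactions if (feature in feats) == present]
        if subset:
            p = sum(subset)
            cond += len(subset) / n * binary_entropy(p, len(subset) - p)
    return base - cond

def top_k_features(transactions, candidates, k=3):
    # keep the k candidate features with the highest information gain
    return sorted(candidates,
                  key=lambda f: information_gain(transactions, f),
                  reverse=True)[:k]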
thus, as we expected , taking into account other agents feedback improves prediction accuracy .figure [ fig : compareallreal ] compares d - stereotrust model with dichotomy - only ( stereotrust is omitted as it is worse than d - stereotrust in terms of prediction accuracy ) .error bars are removed for clarity and only sof approach , which outperforms sop approach is showed for each model .we see that d - stereotrust model provides more accurate prediction than dichotomy - only does .this proves that by considering both stereotypes and global information we are able to predict target agent s behavior more accurately than using only the global information .[ tab : epinionsmae ] .mean absolute error ( with 95% confidence interval ) .the values separated by ` / ' shows the results with ( left ) and without ( right ) feature selection respectively .epinions.com dataset .shortest path ; mrp most reliable path [ cols="<,^,^",options="header " , ] synthetic dataset enables us also to test the stereotype - sharing overlay network ( sson ) used when no or few past interactions are available . in the experiment ,we select 10 `` inexperienced '' trustors with less than 5 interactions .when encountering a review , an inexperienced trustor requests stereotypes from other , trustworthy agents ( called stereotype providers ) ; and then combines the stereotypes ( see section [ sec : combiningstereotype ] ) .table [ tab : syndatamaesson ] shows the performance of stereotrust when sson is applied ( the values at the left of ` / ' ) .the average mean absolute errors for honest agents , dishonest agents and all agents are , respectively , 0.1277 , 0.1211 and 0.1242 . by comparing with the results without sson ( tab .[ tab : syndatamae ] ) , we notice that even if the trustor does not have sufficient local knowledge , by requesting other agents stereotypes , it is still able to reasonably estimate the trustworthiness of the potential interaction partner . in order to further validate the usefulness of the sson , we let the 10 selected inexperienced trustors collect third party stereotypes randomly for trust estimation ( i.e. , sson is not applied ) . on the average ,sson lowers mae in comparison with random selection by around 13.22% .this result demonstrates that sson evidently improves the performance of stereotrust when trustor s local knowledge is insufficient .d - stereotrust has the highest prediction accuracy ( the smallest mae ) at the cost of incomplete coverage ( 95.5% , table [ tab : syndatamae ] ) .d - stereotrust requires only fragmentary third - party information , as the trustor only asks agents that are also interested in corresponding categories .consequently , d - stereotrust is robust even when up to 40% of agents are malicious .eigentrust not only has lower prediction accuracy ( the difference , albeit small , is statistically significant ) , but also requires a complex , distributed calculation , thus incurring high communication overhead .additionally , eigentrust requires some pre - trusted agents , which may not exist in reality .d - stereotrust uses both stereotypes and historic information . as the accuracy of dichotomy - only is lower, stereotypes indeed improve the prediction accuracy .other standard methods , the feedback aggregation and both variations of transitive trust models , have significantly lower prediction accuracy . 
moreover , the transitive trust model ( using the most reliable path ) has the lowest coverage .blade model improves prediction accuracy by learning rating providers bias .however , blade requires that the trustor must have sufficient experience with the rating providers such that the target agent s trustworthiness can be reliably inferred .this makes blade ineffective in some cases , and hence lowering its overall accuracy .although stereotrust has lower prediction accuracy than d - stereotrust , stereotrust uses only local information , and no opinions of third - parties .the key to accuracy here are the appropriate stereotypes the more the stereotypes mirror the true honesty of the agents , the more accurate predictions stereotrust will form .we expect that in some contexts such stereotypes can be evaluated by , or formed with , an assistance of a human operator . with sson ( table [ tab : syndatamaesson ] ), the trustors without sufficient local knowledge can predict trust by requesting other agents stereotypes .thus , the coverage of stereotrust model becomes complete . since other agents may provide fake stereotypes maliciously, some of the collected stereotypes may not derive accurate trust .however , by updating trust scores of stereotype providers based on past accuracy of their reported stereotypes , the final aggregated stereotype information is still able to reasonably predict trustworthiness of the unknown agents .we consider the problem of predicting trustworthiness of an unknown agent in a large - scale distributed setting .traditional approaches to this problem derive unknown agent s trust essentially by combining trust of third parties to the agent with the trustor s trust of these third parties ; or simply by aggregating third parties feedbacks about the unknown agent .in contrast , stereotrust uses different _ kind _ of information : that of _ semantic _ similarity of the unknown agent to other agents that the trustor personally knows . in stereotrust, a trustor builds stereotypes that aggregate and summarize the experience it had with different kinds of agents .the criteria by which the stereotypes are constructed are very flexible .for instance , stereotypes can be based on information from agents personal profiles , or the class of transactions they make .so one basic assumption of stereotrust is that such profile information is correctly available .we believe this is a reasonable assumption because it is rare to interact with an agent about which absolutely no information is available . facing a possible transaction with an unknown agent , the trustor estimates its trust by cumulating the experience from the stereotypes to which the unknown agent conforms .the stereotypes are based on the local perspective and local information of the trustor , and , therefore , are naturally suited for large - scale systems ; personalized for each trustor ; and less susceptible to false or unsuitable information from third parties .when some third parties opinions about an agent are available , we propose an enhancement ( d - stereotrust ) , which creates a `` good '' and a `` bad '' subgroup inside each stereotype .the trustor assigns each one of its previous transaction partners to one of these groups based on its personal experience with the partner ( e.g. 
, the ratio of failed transactions ) .then , the trustor uses the aggregated third parties opinions about the unknown agent to determine how similar is the agent to the `` good '' and the `` bad '' subgroup .third parties opinions are a small subset of information used by traditional mechanisms ( such as feedback aggregation or eigentrust - type algorithms ) . according to our experiments , by combining stereotypes with the partial historic information , d - stereotrust predicts the agent s behavior more accurately than eigentrust and feedback aggregation .stereotrust can be not only personalized for a particular trustor , but also for a particular type of interactions ( classified by groups ) .we are currently working on such extensions of stereotrust , as well as exploring possible concrete applications , including a p2p storage system . while our technique is novel in the context of evaluating trust and provides a new paradigm of using stereotypes for trust calculation instead of using feedbacks or a web of trust it bears resemblance with collaborative filtering techniques .the primary difference is that stereotrust uses only local information in a decentralized system .however , the similarities also mean that while our work proposes a new paradigm to determine trust , the methodology we use is not out of the blue .also , we anticipate that sophisticated collaborative filtering as well as machine learning techniques can be adopted to further improve stereotrust s performance .some nascent attempts in this direction include our recent works .as mentioned in section [ sec : discussion ] , stereotrust is not explicitly designed to handle the scenario where the agents behavior may change over time . however , by studying relevant contextual information for stereotype formation , such a problem can be partially addressed . as part of future work, we intend to work on incorporating various dynamic approaches ( such as learning agents behavior pattern ) to the model . c. burnett , timothy j. norman , and k. sycara .bootstrapping trust evaluations through stereotypes . in _ proceedings of the 9th international conference on autonomous agents and multiagent systems _ ,pages 241248 , toronto , canada , 2010 . c. burnett , timothy j. norman , and k. sycara .sources of stereotypical trust in multi - agent systems . in _ proceedings of the fourteenth international workshop on trust in agent societies _ , pages 2539 , taipei , taiwan , 2011 .sepandar d. kamvar , mario t. schlosser , and hector garcia - molina .the eigentrust algorithm for reputation management in p2p networks . in _ proceedings of the 12th international conference on world wide web _ , www 03 , pages 640651 , 2003 .xin liu , anwitaman datta , krzysztof rzadca , and ee - peng lim .stereotrust : a group based personalized trust model . in _ proceeding of the 18th acm conference on information and knowledge management ( cikm ) _, 2009 .xin liu , tomasz kaszuba , radoslaw nielek , anwitaman datta , and adam wierzbicki . using stereotypes to identify risky transactions in internet auctions . in _ proceedings of the 2010 ieee second international conference on social computing _ , 2010 .sofus a. macskassy . leveraging contextual information to explore posting and linking behaviors of bloggers . in _ proceedings of the 2010 international conference on advances in social networks analysis and mining _ , asonam 10 , 2010 .l. mui , m. mohtashemi , and a. halberstadt halberstadt . 
a computational model of trust and reputation .in _ system sciences , 2002 .proceedings of the 35th annual hawaii international conference on _ , pages 2431 2439 , 2002 .ajay ravichandran and jongpil yoon .trust management with delegation in grouped peer - to - peer communities . in _sacmat 06 : proceedings of the eleventh acm symposium on access control models and technologies _ , pages 7180 , new york , ny , usa , 2006 .kevin regan , pascal poupart , and robin cohen .bayesian reputation modeling in e - marketplaces sensitive to subjectivity , deception and change . in _ proceedings of the 21st national conference on artificial intelligence - volume 2_ , 2006 .liang sun , li jiao , yufeng wang , shiduan cheng , and wendong wang . an adaptive group - based reputation system in peer - to - peer networks . in _wine 2005 , lncs 3828 _ , volume 0 , pages 651659 .springer , 2005 .gayatri swamynathan , ben y. zhao , and kevin c. almeroth .decoupling service and feedback trust in a peer - to - peer reputation system . in_ proceedings of international workshop on applications and economics of peer - to - peer systems ( aepp ) _ , pages 8290 , 2005 .liu xin , gilles tredan , and anwitaman datta .metatrust : discriminant analysis of local information for global trust assessment . in _ proceedings of the 10th international conference on autonomous agents and multiagent systems ( aamas )
|
models of computational trust support users in taking decisions . they are commonly used to guide users judgements in online auction sites ; or to determine quality of contributions in web 2.0 sites . however , most existing systems require historical information about the past behavior of the specific agent being judged . in contrast , in real life , to anticipate and to predict a stranger s actions in absence of the knowledge of such behavioral history , we often use our `` instinct''essentially stereotypes developed from our past interactions with other `` similar '' persons . in this paper , we propose stereotrust , a computational trust model inspired by stereotypes as used in real - life . a stereotype contains certain features of agents and an expected outcome of the transaction . when facing a stranger , an agent derives its trust by aggregating stereotypes matching the stranger s profile . since stereotypes are formed locally , recommendations stem from the trustor s own personal experiences and perspective . historical behavioral information , when available , can be used to refine the analysis . according to our experiments using epinions.com dataset , stereotrust compares favorably with existing trust models that use different kinds of information and more complete historical information .
|
this paper considers a point - to - point communication scenario where a source denoted by wants to transmit a message to a destination denoted by through a set of independent additive white gaussian noise ( awgn ) channels .the set of independent awgn channels is referred to as the _ parallel gaussian channel _9.4 ) ( also called the _gaussian product channel _ in ( * ? ? ?* sec 3.4.3 ) ) .the parallel gaussian channel has been used to model the multiple - input multiple - output ( mimo ) channel ( * ? ? ?7.1 ) an essential channel model in wireless communications .the parallel gaussian channel consists of independent awgn channels through which the source wants to send a message to the destination .let be the index set of the channels . for the channel use ( or time slot ) , the relation for the channel between the input signal and output signal is where are independent gaussian noises . for each , the variance of the noise induced by the channelis assumed to be some positive number for all channel uses , i.e. , =n_\ell ] , ^t ] respectively .every codeword transmitted by node over channel uses should satisfy the following _ peak power constraint _ where denotes the permissible power for : if we would like to transmit a uniformly distributed message across this channel , it was shown by shannon that as the blocklength approaches to infinity , the maximum rate of communication converges to a certain limit called _capacity_. the closed - form expression of the capacity can be obtained by finding the optimal power allocation among the channels , which is described as follows .define the mapping as where can be viewed as the power allocated to channel . if we let , , , , denote the real numbers yielded from the water - filling algorithm ( * ? ? ?* ch 9.4 ) where and for each and let ^t \label{pellvalue*}\ ] ] be the optimal power allocation vector , then the capacity of the parallel gaussian channel was shown in to be bits per channel use .more specifically , if designates the maximum number of messages that can be transmitted over channel uses with permissible power and average error probability , one has the capacity result has been strengthened by polyanskiy - poor - verd ( * ? ? ?78 ) and tan - tomamichel ( * ? ? ?appendix a ) as where is the gaussian dispersion function defined as and is the cumulative distribution function ( cdf ) of the standard normal distribution . _feedback _ , which is the focus of the current paper , is known to simplify coding schemes and improves the performance of communication systems in many scenarios .see ( * ? ? ?17 ) for a thorough discussion of the benefits of feedback in single- and multi - user information theory .when feedback is allowed , each input symbol depends on not only the transmitted message but also all the previous channel outputs up to time , i.e. , the symbols .it was shown by shannon that the presence of noiseless feedback does not increase the capacity of point - to - point _memoryless channels_. therefore , the feedback capacity of the parallel gaussian channel remains to be . in the presence of feedback ,if we let denote the maximum number of messages that can be transmitted over channel uses with permissible power and average error probability , it follows directly from that the optimal rate satisfies in this paper , the main contribution is a conceptually simple , concise and self - contained proof that in the presence of feedback , the first- and second - order terms in the asymptotic expansion in remains unchanged , i.e. 
, our work is inspired by the recent study of the fundamental limits of communication over discrete memoryless channels ( dmcs ) with feedback .it was shown by altu and wagner ( * ? ? ?1 ) that for some classes of dmcs whose capacity - achieving input distributions are not unique ( in particular , the minimum and maximum conditional information variances differ ) , coding schemes with feedback achieve a better second - order asymptotics compared to those without feedback . they also showed ( * ? ? ?2 ) that feedback does not improve the second - order asymptotics of dmcs if the conditional variance of the log - likelihood ratio , where is the unique capacity - achieving output distribution , does not depend on the input .such dmcs include the class of weakly - input symmetric dmcs initially studied by polyanskiy - poor - verd .however , we note that the proof technique used by altu and wagner requires the use of a berry - essen - type result for bounded martingale difference sequences , and their technique can not be extended to the parallel gaussian channel with feedback because each input symbol belongs to an interval ] , the first - order coding rate of the awgn channel with feedback can be improved from to ( * ? ? ?ii ) where denotes the tolerable error probability . for the general case , the proof in ( * ? ? ?ii ) can easily be extended to show that the first - order coding rate of the parallel gaussian channel with feedback can be improved from to , and hence no longer holds .this paper is organized as follows .the next subsection summarizes the notation used in this paper .section [ sectiondefinition ] provides the problem setup of the parallel gaussian channel with feedback under the peak power constraint and presents our main theorem .section [ sectionprelim ] contains the preliminaries required for the proof of our main theorem , which include important properties of non - asymptotic binary hypothesis testing quantities and modification of power allocation among the parallel channels. section [ sectionmainresult ] presents the proof of our main theorem .section [ sectionconclusion ] concludes this paper by explaining the novel ingredients in the proof of the main theorem and the major difficulty in strengthening the main theorem .the sets of natural numbers , non - negative integers , real numbers and non - negative real numbers are denoted by , , and respectively .an -dimensional column vector is denoted by ^t ] and ] and ^t (n , m , p , \varepsilon) ] be a real number .the minimum type - ii error in a simple binary hypothesis test between and with type - i error less than is defined as the existence of a minimizing test is guaranteed by the neyman - pearson lemma .we state in the following lemma and proposition some important properties of , which are crucial for the proof of theorem [ thmmainresult ] .the proof of the following lemma can be found in , for example , wang - colbeck - renner ( * ? ? ?* lemma 1 ) .[ lemmadpi ] let and be two probability distributions on some , and let be a function whose domain contains .then , the following two statements hold : 1 .( data processing inequality ( dpi ) ) .2 . for all , .the proof of the following proposition can also be found in wang - colbeck - renner ( * ? ? 
?* lemma 3 ) .[ propositionbhtlowerbound ] let be a probability distribution defined on for some finite alphabet , and let be the marginal distribution of .in addition , let be a distribution defined on .suppose is the uniform distribution , and let be a real number in where is distributed according to .then , for each transmitted codeword , we can view as the power allocated to the channel for each . in the proof of theorem [ thmmainresult ] , an early step is to discretize the power allocated to the channels . to this end, we need the following definition which defines the power allocation vector of a sequence .[ definitionpowertype ] the _ power allocation mapping _ is defined as ^t.\ ] ] we call the _ power type of . the proof of theorem [ thmmainresult ] involves modifying a given length- code so that useful bounds on the performance of the given code can be obtained by analyzing the modified code .more specifically , the encoding functions the given code are modified so that the power type of the random codeword generated by the modified code always falls into some small bounding box .the specific modification of the encoding functions is described in the following definition .[ definitiontransformedcode ] given an -feedback code , let , and be the corresponding message alphabet , encoding functions and decoding function respectively .in addition , let and \in \mathbb{r}_+^l f_{\ell , k}(w , \mathbf{y}^{k-1})^2+\sum\limits_{i=1}^{k-1 } \tilde f_{\ell , i}(w , \mathbf{y}^{i-1})^2 \le n(s_\ell+\gamma) f_{\ell , k}(w , \mathbf{y}^{k-1})^2+\sum\limits_{i=1}^{k-1 } \tilde f_{\ell , i}(w , \mathbf{y}^{i-1})^2 > n(s_\ell+\gamma) f_{\ell , n}(w , \mathbf{y}^{n-1})^2+\sum\limits_{i=1}^{n-1 } \tilde f_{\ell , i}(w , \mathbf{y}^{i-1})^2 \in[n(s_\ell - l\gamma ) , n(s_\ell+\gamma)] f_{\ell , n}(w , \mathbf{y}^{n-1})^2+\sum\limits_{i=1}^{n-1 } \tilde f_{\ell , i}(w , \mathbf{y}^{i-1})^2 < n(s_\ell - l\gamma) f_{l , n}(w , \mathbf{y}^{n-1})^2+\hspace{-0.25 in}\sum\limits_{\substack{(\ell , i)\in\\\hspace{0.25 in } \mathcal{l}\times\{1 , 2 , \ldots , n\ } \setminus\{(l , n)\ } } } \hspace{-0.55 in}\tilde f_{\ell , i}(w , \mathbf{y}^{i-1})^2 = np f_{l , n}(w , \mathbf{y}^{n-1})^2+\hspace{-0.25 in}\sum\limits_{\substack{(\ell , i)\in\\\hspace{0.25 in } \mathcal{l}\times\{1 , 2 , \ldots , n\ } \setminus\{(l , n)\ } } } \hspace{-0.55in}\tilde f_{\ell , i}(w , \mathbf{y}^{i-1})^2 < np ] such that .then , the -modified code is formed by 1 .truncating a transmitted codeword if the power transmitted over the channel exceeds , which can be seen from ; 2 . boosting the power of the transmitted codeword if the power transmitted over the channel falls below , which can be seen from the second clause of; 3 . adjusting the last symbol transmitted over the channel ( i.e. , ) so that the total transmitted power is exactly equal to , which can be seen from the second clause of .given an -feedback code , we consider the corresponding -modified code constructed in definition [ definitiontransformedcode ] and let be the distribution induced by the modified code . by , we see that \bigg\}\cap\bigg\{\sum\limits_{\ell=1}^{l}\sum\limits_{k=1}^{n}x_{\ell , k}^2 = np\bigg\}\right\}=1 . 
\label{modifiedcodeproperty*}\end{aligned}\ ] ] define the -bounding box \times [ s_2-\delta , s_2+\delta]\times \ldots \times [ s_l-\delta , s_l+\delta ] .\label{defgammas}\ ] ] for each and each .it then follows from that the following lemma is a natural consequence of definition [ definitiontransformedcode ] , and the proof is deferred to appendix [ appendixa ] .[ lemmatransformedcode ] given an -feedback code , let be the distribution induced by the code .fix any and any such that , and let be the distribution induced by the -modified code based on the -feedback code .then , we have for all .fix an and choose an arbitrary sequence of -feedback codes . since by, it suffices to show that for all . to this end , fix an arbitrary . using definition [ defcode ], we have for the chosen -feedback code for each . given the chosen -feedback code , we can always construct an -feedback code by appending a carefully chosen tuple to each transmitted codeword generated by the -feedback code such that which implies that in addition , given the -feedback code , we can always construct an -feedback code by appending a carefully chosen to each transmitted codeword generated by the -feedback code such that to simplify notation , we let construct the set of power allocation vectors which can be viewed as a set of quantized power allocation vectors with quantization level that satisfy the equality power constraint it follows from , and definition [ definitionpowertype ] that and let be the probability distribution induced by the -feedback code constructed above for each , where is obtained according to . fix an and the corresponding -feedback code .recall the definition of for each in and define the distribution where the choice of in is motivated by the choice of the auxiliary output distribution in ( * ? ? ?x - a ) where dmcs are considered .then , it follows from proposition [ propositionbhtlowerbound ] and definition [ defcode ] with the identifications , , , , and that using the dpi of by introducing and , we have where by . combining , and , we have fix any constant to be specified later . using lemma [ lemmadpi ] , and, we have which together with implies that define term in is replaced by for any . ] to be the set of power allocation vectors in that are close to the optimal power allocation vector ( cf . ). following , we use to obtain in order to bound the first term in , we let and define be the distribution induced by the -modified code based on the -feedback code defined in definition [ definitiontransformedcode ] . then , consider the following chain of inequalities : where * is due to lemma [ lemmatransformedcode ] .* is due to the definition of in .similarly , in order to bound the second term in , we let be the distribution induced by the -modified code and consider the following chain of inequalities for each : where * is due to lemma [ lemmatransformedcode ] . *is due to the definition of in .combining , , and the definition of in followed by letting for each and each , we obtain where is as defined in . in order to simplify the rhs of, we define such that in addition , for each , let for each . 
by using , and together with the facts by that and for each , we can express as in order to simplify the first term in , we define for each and want to show that = \lim_{n\rightarrow \infty}\e_{p _ { \boldsymbol{z}^n}^{*}}\left[e^{\frac{t}{\sqrt{n } } \sum\limits_{k=1}^n v_k^{(\mathbf{p}^*)}}\right ] \label{levythmeq1}\end{aligned}\ ] ] for all .to this end , recall the following statements due to the channel law : 1 . for all and all ; 2 . are independent ; 3 . and are independent for all . for any and any such that , since by and for all , we have \notag\\ * & \le \e_{p_{\boldsymbol{x}^n , \boldsymbol{y}^n}^{*}}\left[e^{\frac{t}{\sqrt{n } } \sum\limits_{k=1}^n u_k^{(\mathbf{p}^*)}}\right ] \\ * & \le \e_{p_{\boldsymbol{x}^n , \boldsymbol{y}^n}^{*}}\left[e^{\frac{t}{\sqrt{n}}\sum\limits_{k=1}^n u_k^{(\mathbf{p}^*)}}\cdot e^{t^2\sum\limits_{\ell=1}^l\frac{n_\ell \left(p_\ell + l^2\gamma-\frac{1}{n}\sum\limits_{k=1}^n x_{\ell , k}^2\right ) } { 2(p_\ell+n_\ell)\left(p_\ell+n_\ell+\frac{tp_\ell}{\sqrt{n } } \right)}}\right ] , \label{levythmeq1 * } \end{aligned}\ ] ] which implies by straightforward calculations based on , and the channel law that \cdot e^{-t^2 l^2\gamma \sum\limits_{\ell=1}^l\frac{n_\ell } { 2(p_\ell+n_\ell)\left(p_\ell+n_\ell+\frac{tp_\ell}{\sqrt{n } } \right ) } } \notag\\ & \le \e_{p_{\boldsymbol{x}^n , \boldsymbol{y}^n}^{*}}\left[e^{\frac{t}{\sqrt{n } } \sum\limits_{k=1}^n u_k^{(\mathbf{p}^*)}}\right ] \\ &\le \e_{p_{\boldsymbol{x}^n , \boldsymbol{y}^n}^{*}}\left[e^{\frac{t}{\sqrt{n } } \sum\limits_{k=1}^n v_k^{(\mathbf{p}^*)}}\right ] \cdot e^{t^2 l^2\gamma \sum\limits_{\ell=1}^l\frac { n_\ell } { 2(p_\ell+n_\ell)\left(p_\ell+n_\ell+\frac{tp_\ell}{\sqrt{n } } \right)}}. \label{levythmeq2}\end{aligned}\ ] ] for the sake of completeness , the derivation of can be found in appendix [ appendixc ] . combining and , we conclude that holds for each . since the moment generating functions of and converge to the same function , it follows from curtiss theorem ( * ? ? ?3 ) that recognizing that are independent zero - mean gaussian random variables with variance by the definition of in and the definition of in , we apply the central limit theorem and obtain which together with implies that in order to bound the second term in , we consider a fixed and want to show that there exists some such that for all .to this end , we first define the lagrangian function as where is the unique number that satisfies and and is defined for each as define . then for all , we use taylor s theorem to obtain for some that lies on the line that connects and , where denotes the gradient which satisfies and denotes the hessian matrix that satisfies for the sake of completeness , the derivations of and are contained in appendix [ appendixb ] .combining , and , we have for all which together with the definitions of and in and respectively implies that consequently , holds by setting following , we consider for each where * is due to . *follows from the definition of in .following the standard approach for obtaining large deviation bounds , we apply markov s inequality on the rhs of and obtain for each }{e^{\kappa n^{1/6}+ \sqrt{\mathrm{v}(\mathbf{p}^*)}\ , \phi^{-1}(\varepsilon+\tau)}}. 
\label{eqnbht8thchain}\end{aligned}\ ] ] in order to bound the rhs of , consider the following chain of inequalities for each : & = \left(\prod_{\ell=1}^l\frac{s_\ell+n_\ell}{(1+n^{-1/2})s_\ell+n_\ell}\right)^{n/2}e^{\sum\limits_{\ell=1}^l\left(\frac{\sqrt{n}s_\ell}{2(s_\ell+n_\ell)}+\frac{n_\ell s_\ell}{2(s_\ell+n_\ell)\left(\left(1+n^{-1/2}\right)s_\ell+n_\ell\right)}\right)}\label{eqnbht9thchaina}\\ & \le \left(\prod_{\ell=1}^l\left(1-\frac{n^{-1/2}s_\ell}{(1+n^{-1/2})s_\ell+n_\ell}\right)^{n/2}e^{\frac{\sqrt{n}s_\ell}{2(s_\ell+n_\ell)}}\right)e^{\sum\limits_{\ell=1}^l\frac{n_\ell s_\ell}{2(s_\ell+n_\ell)^2 } } \\ & \le e^{\sum\limits_{\ell=1}^l\left ( \frac{s_\ell^2}{2((1+n^{-1/2})s_\ell+n_\ell)(s_\ell+n_\ell ) } + \frac{n_\ell s_\ell}{2(s_\ell+n_\ell)^2}\right ) } \label{eqnbht9thchainb}\\ & \le e^{\sum\limits_{\ell=1}^l\frac { s_\ell}{2(s_\ell+n_\ell ) } } \\ & \le e^{l/2},\label{eqnbht9thchain}\end{aligned}\ ] ] where * follows from straightforward calculations based on the definition of in , the property of in and the channel law , which are elaborated in appendix [ appendixd ] for the sake of completeness .* is due to the fact that for all .combining and , we have the following large deviation bound for each : following , we use and to obtain combining , , , , and , we have for all sufficiently large , which together with implies that since is arbitrary , it follows from and definition [ defdispersion ] that mentioned in section [ subsecrelatedwork ] , the proof of ( * ? ? ?2 ) which obtains upper bounds on the second - order asymptotics of dmcs with feedback can not be generalized to the parallel gaussian channel with feedback .indeed , the proof of theorem [ thmmainresult ] follows the standard procedures for obtaining the second - order asymptotics of dmcs without feedback ( see , e.g. , ( * ? ? ?* proof of th .50 ) and ( * ? ? ?iii ) ) except the following three novel ingredients : 1 . instead of classifying transmitted codewords into polynomially many type classes based on their empirical distributions which is generally not possible for channels with continuous input alphabet, we discretize the transmitted power and classify the codewords into polynomially many type classes based on their discretized power types . in particular , the collection of _ power type classes _ in plays a key role in our analysis , and there are polynomially many power type classes by .the details can be found in section [ stepainproof ] in the proof .curtiss theorem rather than berry - essen theorem is invoked to bound the information spectrum term ( the first term in ) related to transmitted codewords whose types are close to the optimal power allocation . in particular , berry - essen theorem for bounded martingale difference sequences can not be used to bound the information spectrum term in the presence of feedback because each input symbol belongs to an interval $ ] that grows unbounded as increases . instead, we apply curtiss theorem to show that the distribution of the sum of random variables in the information spectrum term converges to some distribution generated from a sum of i.i.d .random variables ( i.e. , ) , thus facilitating the use of the usual central limit theorem .the details can be found in section [ stepbinproof ] .3 . 
in order to bound the information spectrum term related to transmitted codewords whose types are far from the optimal power allocation ( the second term in ), the usual approach is to bound the information spectrum term by an _average _ of exponentially many upper bounds where each upper bound is then further simplified by invoking chebyshev s inequality ( * ? ? ?however , due to the presence of feedback , the information spectrum term can be expressed as only a sum ( instead of average ) of polynomially many upper bounds as shown in the second term in . in order to control the _ sum _ of polynomially many upper bounds, we have to resort to large deviation bounds as shown in rather than the weaker chebyshev s inequality .the details can be found in section [ stepcinproof ] . if the feedback link is absent , the third - order term of the optimal finite blocklenth rate is as shown in in section [ introduction ] .the third - order term can be obtained by applying berry - essen theorem to bound an information spectrum term ( analogous to the first term in ) . in the presence of feedback , theorem [ thmmainresult ]shows that the third - order term is .if we want to compute an explicit upper bound on the third - order term using the current proof technique , an intuitive way is to invoke a non - asymptotic version of curtiss theorem that can measure the proximity between two distributions based on the proximity between their moment generating functions .however , such a non - asymptotic version of curtiss theorem does not exist to the best of our knowledge , which prohibits us from strengthening the current bound on the third - order term .it is worth noting that and in our proof break down if the moment generating functions are replaced with characteristic functions . if one can find a way to make characteristic functions amenable to our proof approach , then a non - asymptotic version of lvy s continuity theorem known as _essen s smoothing lemma _ ( see , e.g. , ( * ? ? ?1.5.2 ) ) may be invoked to tighten the third - order term herein .let and be the encoding functions of the -feedback code and the -modified code respectively for each and each . forany and any such that , \ldots , \left[\begin{matrix } f_{1,n}(w , \mathbf{y}^{n-1 } ) \\\vdots\\ f_{l , n}(w , \mathbf{y}^{n-1 } ) \end{matrix } \right]\right)\in \gamma^{(\gamma)}(\mathbf{s } ) \label{lemmatransformedcodeeq1}\end{aligned}\ ] ] and it follows from , and in definition [ definitiontransformedcode ] that ,\ldots , \left[\begin{matrix } f_{1,n}(w , \mathbf{y}^{n-1 } ) \\\vdots\\ f_{l , n}(w , \mathbf{y}^{n-1 } ) \end{matrix } \right]\right)= \left ( \left[\begin{matrix } \tilde f_{1,1}(w ) \\\vdots\\ \tilde f_{l,1}(w ) \end{matrix } \right ] , \ldots , \left[\begin{matrix } \tilde f_{1,n}(w , \mathbf{y}^{n-1 } ) \\\vdots\\ \tilde f_{l , n}(w , \mathbf{y}^{n-1 } ) \end{matrix } \right]\right ) .\label{lemmatransformedcodeeq3}\end{aligned}\ ] ] since holds for any and that satisfy and , it follows that holds for all .fix any and any .it suffices to show that = \e_{p_{\boldsymbol{x}^n , \boldsymbol{y}^n}^{*}}\left[e^{t\sum\limits_{k=1}^n v_k^{(\mathbf{p}^*)}}\right ] , \label{appendixceq1 } \end{aligned}\ ] ] which will then imply by using that holds . 
to this end, we consider the following chain of equalities for each : \notag\\ & = \e\left[\e\left[\left.e^{t\sum\limits_{k=1}^{m } u_k^{(\mathbf{p}^*)}}\cdot e^{t^2\sum\limits_{\ell=1}^l\frac{n_\ell \left(np_\ell -\sum\limits_{k=1}^{m } x_{\ell , k}^2\right ) } { 2(p_\ell+n_\ell)\left((1+t)p_\ell+n_\ell\right)}}\right|\boldsymbol{x}^{m } , \boldsymbol{z}^{m-1}\right]\right ] \\ & = e^{t\sum\limits_{\ell=1}^l\frac{p_\ell}{2(p_\ell+n_\ell)}}\sqrt{\prod_{\ell=1}^l\frac{p_\ell+n_\ell}{(1+t)p_\ell+n_\ell } } \cdot\e\left[\e\left[\left.e^{t\sum\limits_{k=1}^{m-1 } u_k^{(\mathbf{p}^*)}}\cdot e^{t^2\sum\limits_{\ell=1}^l\frac{n_\ell \left(np_\ell -\sum\limits_{k=1}^{m-1 } x_{\ell , k}^2\right ) } { 2(p_\ell+n_\ell)\left((1+t)p_\ell+n_\ell\right)}}\right|\boldsymbol{x}^{m-1 } , \boldsymbol{z}^{m-1}\right ] \right ] \label{appendixceq2b } \end{aligned}\ ] ] where is due to the definition of in and the fact that and are independent .applying recursively from to , we have \notag\\ & = \left(\prod_{\ell=1}^l\frac{p_\ell+n_\ell}{(1+t)p_\ell+n_\ell}\right)^{n/2}e^{n\sum\limits_{\ell=1}^l\left(\frac{tp_\ell}{2(p_\ell+n_\ell)}+\frac{t^2n_\ell p_\ell}{2(p_\ell+n_\ell)\left((1+t)p_\ell+n_\ell\right)}\right)}. \label{appendixceq3 } \end{aligned}\ ] ] on the other hand , straightforward calculations based on the definition of in and the fact that are independent implies that & = \left(\prod_{\ell=1}^l\frac{p_\ell+n_\ell}{(1+t)p_\ell+n_\ell}\right)^{n/2}e^{n\sum\limits_{\ell=1}^l\left(\frac{tp_\ell}{2(p_\ell+n_\ell)}+\frac{t^2n_\ell p_\ell}{2(p_\ell+n_\ell)\left((1+t)p_\ell+n_\ell\right)}\right)}. \label{appendixceq4 } \end{aligned}\ ] ] combining and , we obtain .straightforward calculations based on reveal that for all , we obtain that \label{appendixbeq1}\end{aligned}\ ] ] and is a diagonal matrix that satisfies .\label{appendixbeq2}\end{aligned}\ ] ] combining , , and , we have .in addition , for any such that , it follows from that for all , which then implies that holds for all .let . fix any . due to , it suffices to show that \notag\\ & = \left(\prod_{\ell=1}^l\frac{s_\ell+n_\ell}{(1+t)s_\ell+n_\ell}\right)^{n/2}e^{n\sum\limits_{\ell=1}^l\left(\frac{ts_\ell}{2(s_\ell+n_\ell)}+\frac{t^2n_\ell s_\ell}{2(s_\ell+n_\ell)\left((1+t)s_\ell+n_\ell\right)}\right)}. \label{appendixdeq1 } \end{aligned}\ ] ] replacing with in the steps leading to and , we obtain .i. a. ibragimov and y. v. linnik , _ independent and stationary sequences of random variables _ , j. f. c. kingman ,ed.1em plus 0.5em minus 0.4emgroningen , netherlands : wolters - noordhoff publishing , 1971 .
|
this paper investigates the asymptotic expansion for the maximum coding rate of a parallel gaussian channel with feedback under the following setting : a peak power constraint is imposed on every transmitted codeword , and the average error probability of decoding the transmitted message is non - vanishing as the blocklength increases . it is well known that the presence of feedback does not increase the first - order asymptotics of the channel , i.e. , capacity , in the asymptotic expansion , and the closed - form expression of the capacity can be obtained by the well - known water - filling algorithm . the main contribution of this paper is a self - contained proof of an upper bound on the second - order asymptotics of the parallel gaussian channel with feedback . the proof techniques involve developing an information spectrum bound followed by using curtiss theorem to show that a sum of dependent random variables associated with the information spectrum bound converges in distribution to a sum of independent random variables , thus facilitating the use of the usual central limit theorem . combined with existing achievability results , our result implies that the presence of feedback does not improve the second - order asymptotics .
index terms : asymptotic expansion , curtiss theorem , feedback , parallel gaussian channel , second - order asymptotics
|
from the birth ( 1925 - 1926 ) of quantum mechanics to now , it has already produced some strange , mysterious or anti - intuitive superposed states of quantum systems , for examples , a pure state may be superposed by the ground state and an exited state of an isolated atom without interaction or interchanging energy with its outside , and a pure entangled state , which is also a superposed state and can not be represented as a product of two wave functions describing two subsystems , may still maintain the entanglement after the interaction between the two parts ceases .the property of entanglement is called non - locality and considered as spooky action . in some cases ,after the interaction between two parts of a compound system ceases , a subsystem is considered in a pure superposed state and disentangles with other subsystem .for example , an electron through double - slit is considered in a pure superposed state in standard textbooks of quantum mechanics ; on the other hand , when an electron is going through double - slit , the interaction between the electron and the matter of the double - slit certainly exists , and the state of the two parts evolves into an entangled state , whether the interaction ceases or not , the entanglement should be maintained according to current quantum mechanics , then the electron itself is * not * in a pure superposed state ! therefore an absurd conclusion is obtained from quantum mechanics that a physical process may be considered to obtain two different results . until now , almost all authors of the books of quantum mechanics , for example , von neumann , dirac and landau _ et al . _ , thought that non - degenerate energy eigenstates , for example , and of an isolated atom without interaction with its outside , could be superposed , the reasons may be that not only a particle has wave superposition property according to de broglie s assumption about matter wave , but also the wave function standing for the superposed state satisfies the schrdinger equation which is based on the matter wave property .although feynman said that no one can understand quantum mechanics , including the above strange states , and non - locality , many people have always their own understandings different from current points of quantum mechanics due to the points being not all satisfying . those strange states and indigestible properties made einstein _ think that the theory of quantum mechanics is incomplete , and led to the famous argument of complete property of quantum mechanics between einstein and bohr .the argument had been staying in philosophy until bell gave an inequality or a theorem , which was based on hidden variable theory and local reality theory , trying to test the correctness of non - locality of entanglement in experiments and answer the issue of complete property . from then on, a large amount of investigations have been made to find evidence to prove the non - locality theoretically and experimentally .we do not know how profound the physical significance of principles of quantum mechanics is , but we may satisfy the understandings of it being a bit more profound than current . 
a free particle or an isolated quantum system is only an assumption , since it is tiny and always subject to the impact from background or heat - reservior .therefore a quantum system always accompanies its outside , and there exists interaction between them .the interaction between two parts in a compound quantum system may be reconsidered more completely than before and some different understandings and conclusions from current quantum mechanics are obtained in this paper , including a strict conservation law in an isolated quantum system in the evolution ( sec.[conservation ] ) , new understandings of duality of particle and wave ( sec.[duality ] ) , measurement ( sec.[measurement ] ) , and the principle of superposition of states ( sec.[superposition ] ) , three laws corresponding to newton s laws ( sec.[newton ] ) , new understanding of the uncertainty relation ( sec.[uncertainty ] ) , support of the locality of einstein _ et al . _ and arguments against the non - locality of any entangled state , and a simple criterion of coherence is obtained for experimenters to examine the correctness of the non - locality ( sec.[non - locality ] ) .section [ conclusion ] is for the conclusions .according to quantum mechanics , the conservation laws of energy , momentum and angular momentum hold only in the sense of a statistical average , not in the strict sense that an isolated quantum system ( single particles or compound quantum systems ) does not interchange these physical quantities with its outside in the evolution and maintains the conservation of the quantities at any time .perhaps most people prefer the conservation laws in the strict sense than in the sense of the statistical average , since the strict law does not contradict with the classical idea , that the interchanging of energy ( momentum or angular momentum ) is owing to interaction between two subsystems and then each of the quantities maintains conservation at any moment in the evolution ; if a quantum system is isolated , i.e. , there is no interchanging of the quantities with its outside , then the quantities will not vary .but we know that the principle of superposition of states and the uncertainty relation make one accept the conservation laws in an average sense . for example , an isolated atom , which is in ( + ) , maintains conservation of energy in the sense of a statistical average in evolution .my understandings of quantum mechanics , including sec.[superposition]and sec.[uncertainty ] , may resume the conservation laws in a strict sense and there is no contradiction among them .it is well known that the scattering of a photon and an electron , which compose an isolated compound system , obeys the conservation law of energy and matter and the conservation law of momentum all the time from the compton scattering experiments .this may be explained as such that the interaction between the photon and the electron interchanges energy and matter , and momentum between them , and there is no interaction or interchanging these physical quantities with their outside .the strict conservation law of momentum and energy are often used in the process of quantum electrodynamics , also due to the existence of an interaction between subsystems and interchanging the physical quantities in the process . 
in quantum optics ,a simple isolated compound system is composed of a single two - level atom and a single mode quantized field with an interaction between them , the wave function standing for the system state is where is an n - photon state .the photon energy equals the energy difference between the atomic exited and ground levels .the state eq.([eq1 ] ) is an entangled state and the wave function is a superposition of the two degenerate terms that their energies are equal , then it maintains the conservation of energy in the evolution of the isolated compound system all the time .the interaction plays a role of interchanging energy or other physical quantities between the atom and the field .the conservation law may be comprehended as such that _ an isolated quantum system must maintain the conservations of energy , momentum and angular momentum at any time in the evolution , not in a sense of a statistical average ._ the penetration through a potential barrier to a particle seems to violate the conservation of energy .if we consider other matter which offers the potential barrier of interaction with the particle , the whole system evolves an entangled state and can keep the conservation of energy .we think that those states of isolated systems , for examples , ( + ) and ( + , violate the strict conservation law in the evolution , then they do not exist in nature .the observations of some wave phenomena , for example , sound wave , water or liquid wave , elastic wave in solid , let me find out a common point that anyone of these waves has some interaction between particles .a wave is considered as some transmission of a vibration , which is viewed as a source of wave .both transmission and vibration depend on respective interaction .it is easy to find out some interactions between particles in these waves , for example , the interaction between the molecules of atmosphere in sound wave .but the sound wave equation , may easily let one forget interaction .if there were no interaction , these wave phenomena could not appear in matter .so interaction is a requirement of producing or propagating these waves .different interaction produces or propagates different wave .a complex wave , from a wave source , or produced by two or more waves meeting in some place , can be decomposed mathematically into some simple waves and viewed as a superposition of these simple waves .the complex wave must be corresponding to a superposition of some interactions . for light , i.e. , electromagnetic waves , when it meets double - slit or single - slit , the interaction between different parts of light or the interaction with the boundary matter of slit certainly exists , and then it behaves the property of wave , that is , the superposition of different parts .if there is no interaction , the pattern of the interference will disappear and it will not behave the property of wave .one of the most important light wave parameters , wavelength , can not be measured without interaction , interference , for example , first used to measure the wavelength by young in 1801 . from the maxwell equations : we may see that it is some interaction to vary the electric field ( and ) or the magnetic field ( and ) , otherwise they are all static fields . 
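the equations intended after the colon above are presumably the two curl equations , written here with the electric fields ( \mathbf{E} and \mathbf{D} ) and the magnetic fields ( \mathbf{B} and \mathbf{H} ) that the parentheses refer to :
\[
\nabla\times\mathbf{E} = -\,\frac{\partial\mathbf{B}}{\partial t}\;, \qquad
\nabla\times\mathbf{H} = \mathbf{J} + \frac{\partial\mathbf{D}}{\partial t}\;.
\]
they show that a time - varying magnetic field is required to sustain a non - static electric field and vice versa , which is the sense in which some interaction is needed above .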
in the wave equation of electromagnetic waves obtained from the maxwell equations , it hides some interaction which makes the electric field and magnetic field vary .the viewpoint of electric and magnetic fields of faraday is that they are all matters .the change of fields in continuous electromagnetic waves may be some action on charges , for example , antenna of radiation , and some interaction among the matters .if a radiation is a pulse , then a photon is produced .we consider one photon passing through double - slit or single - slit , the explanation of its probability wave superposition property is also due to the interaction between single photons and the boundary matter of slit ._ so the wave property of light comes from its particle property plus some interaction , and particle property is more fundamental than wave property .a single photon itself has no probability wave property and the principle of superposition of its probability waves could not hold without interaction . _ for a material particle , an electron , for example , the typical experiments of proving its wave property are the crystal diffraction and the double - slit interference pattern of the diffraction or the interference , the characteristic of the wave superposition property , is also due to the existence of the interactions between an electron and the crystal or the double - slit in the two experiments . if the interaction ceases , or the electrons are far from the crystal or the double - slit , the pattern will disappear ._ an electron may interact with its electromagnetic field , then its wave property may be intrinsic .if there is no interaction , a single neutral material particle itself has no wave property and the principle of superposition of its probability waves could not hold either . _a material particle is subject to tiny action and there exists no area where any interaction does not exist , the vacuum , for example , can not be obtained , then the background field always interacts with the material particle considered , therefore its wave property may be considered intrinsic too .so we can understand that the greater the energy of a particle , the smaller the impact to it produced by background field or photon , then the shorter the de broglie wavelength of it .although the background field is difficult to be eliminated , the interaction between two particles can be controlled to become zero , then the superposition of the waves about the system of the two particles will not exist in nature .the mainstream point of measurement is such as pointed out by dirac that , _ from physical continuity , if we make a second measurement of the same dynamical variable immediately after the first , the result of the second measurement must be the same as that of the first .hence after the first measurement has been made , the system is in an eigenstate of the dynamical variable , the eigenvalue it belongs to being equal to the result of the first measurement ._ this is different from the point of landau _ et al ._ that , _ after the measurement , however , the electron is in a state different from its initial one , and in this state the quantity _ f _ does not in general take any definite value . hence , on carrying out a second measurement on the electron immediately after the first , we should obtain for _ f _ a value which did not agree with that obtained from the first measurement . 
_ since landau _considered that the measurement of a microscopic system needs some interaction between an apparatus and the measured system , and the compound system evolves an entangled state , i think that their point is a bit better than that of dirac or the mainstream point of measurement .the common of the above two points is that the measured values are all eigenvalues .but i think that most eigenvalues are not observables , which is explained below .if not measuring a microscopic system or no change in the apparatus , we even can not know whether the system exists .therefore a successful measurement of a microscopic system may be read from the change of the apparatus and the system must be also changed .this may be a fundamental of measurement of microscopic world and is very different from that of macroscopic world , which is a comparison with a standard apparatus without disturbing the system .up to now , only the difference of energy eigenvalues could be measured .the energy of a photon ( or other quantized field ) can be measured if the photon is absorbed or destroyed , for example , atomic spectrum .in addition , when we mention the potential energy of a particle in a field , only its difference has physical meaning .an eigenvalue of a material particle in an energy eigenstate may be measured as such that the final state should have zero energy , then the corresponding difference between the energy eigenvalue and zero could be read from the change of the apparatus .however , the final state may not be certainly in the state of zero energy , then the eigenvalue may not be certainly measured .the above observations let me think that the measured physical quantities of a system may be divided into three kinds that we call them : _ inherent vector _ ( for example , spin and photon polarization , the magnitude of it is constant along any direction , and its direction or eigenvalue can be directly measured ) , _ non - inherent vector _ ( for example , velocity , momentum and angular momentum , the magnitude and direction of it are alterable ) and _ scalar quantity _ ( for example , coordinate , potential and kinetic energy ) . a measurement may be completed in an interval of time or of space , and alter the state of the system considered .one direction of an inherent vector can be considered as an eigenvalue of corresponding eigenvector or eigenstate , and can be directly obtained in measurement , and left it being in the direction or eigenvector , while the whole state of the particle or other quantity must be changed . for example , if an electron is initially in one spin eigenstate and is measured in the stern and gerlach experiment , the eigenvalue or direction of spin is measured and the electron still stays in the eigenstate , whereas the direction of the electron motion is changed . for a measured value of other quantities , energy and momentum , for example , only the difference of two eigenvalues can be obtained , not one of their eigenvalues , therefore we can say that _these eigenvalues themselves , other than inherent vector , are not observables , and the system is not in the initial eigenstate after the result is read . 
_ the statistical explanation of wave function of an isolated compound system then is slightly changed as such that one of the subsystems is only in an eigenstate with some probability , and only the direction of an inherent vector or eigenvalue can be measured , while the other eigenvalues can not .this is different from that in quantum mechanics , that all eigenvalus can be obtained with some probabilities .this is also different from the idea of the physical reality that , _ if , without in any way disturbing a system , we can predict with certainty the value of a physical quantity , then there exists an element of physical reality corresponding to this physical quantity . _some change of a compound system , which is brought by an apparatus in a process of measurement , also brings a break or disappearance of the interaction between parts .it is the break or disappearance of the interaction in a compound system that brings the collapse of the wave function ( reduction of wave packet ) or disentanglement , and evolves a new entangled state if new interaction appears .in his book , zhang points out that interactions always have the effect of non - linearity , which conflicts to the linearity of the principle of superposition of states ; the interaction potential in the schrdinger equation has been treated with _ external field approximate _ and then has an approximate linearity .this approximate linearity may suit for the linearity of the principle .therefore the exact consideration of non - linearity of interaction must destroy the linearity of the principle , and the linearity may make some superposed states deviate real states and become strange . but a large number of results obtained from quantum mechanics with the linearity accord with results of experiments .so this approximate linearity is good enough , and the non - linearity of interactions can not be used to explain the strange superposed states in sec.[introduction ] .the hidden variable theory has been produced to explain the strange superposed states , but hidden variable is still mysterious up to now .the exhaustive explanations of the strange superposed states may be very difficult , but we shall satisfy the decrease of the mysterious extent of the strange states with the following understanding .the schrdinger equation of an isolated compound system of two particles is where is the total wave function of the two particles , , and are the kinetic energy operators of the particles and the interaction potential energy between them , respectively .we suppose that and are the complete collections of the kinetic energy eigenstates of the particles , and and are the kinetic energy eigenvalues , respectively . according to quantum mechanics, can be expanded by the collection as if has two or more terms , it is an entangled state .different term in may have different kinetic energy , while the total energy , corresponding to the different term may maintain conservation in the evolution , due to the interaction interchanging energy within these three parts of the isolated system . those states , in which total energies of different terms are different , should not exist in nature . 
if , still satisfies the schrdinger equation ( [ eq4 ] ) , and maintains entanglement according to quantum mechanics .this is a strange state that it may be in different total energy eigenstates and each one of the two isolated particles may be in different kinetic energy eigenstates without interaction with its outside , which contradicts the strict conservation law of energy .according to our observation , if there is no interaction or interchanging energy with each other , the principle of superposition of waves will not hold , then there is only one term in , i.e. , there is no entanglement .if we do not know the classical information of preparation of the two particles , we have to describe the state in a mixed state it is a disentangled mixed state .the wave function can also be expanded by the complete collection of other eigenstates , momentum of one dimension , for example , the momentum may be interchanged between the two particles , so the total momentum corresponding to the different term should be conserved , or these terms are degenerate for momentum . the schrdinger equation, has been used to describe the dynamic evolution of a particle as a probability wave by a wave function , and generalized to micro - systems .the potential is obviously expressed in the equation , and it is considered as an external field approximation , and a part interacted with the particle or the micro - system is neglected in eq.([eq8 ] ) .if there is no , a particle or a micro - system should be isolated and there is no energy , momentum and angular momentum exchanging with its outside ._ so an isolated particle or micro - system should have definite energy , momentum and angular momentum , and no state of a particle or a micro - system is any superposition of different eigenstates of the three physical quantities . _ on the electron double - slit experiment discussed by wave function , we suppose that denotes the state of the electron passing through the slit 1 and denotes the state of the slit matter corresponding to , and are similar .although the interaction potential between the slit matter and the electron passing through the slits may be very complicated , and has not been expressed in the hamiltonian , it always exists .we may explain it as such that the compound system is in some state which is an entangled state , and the electron will be in an approximately superposed state , i.e. , the popular one in quantum mechanics if the states and are considered same approximately , therefore we can also use the state ( [ eq9 ] ) to explain the pattern of interference of the electron through double - slit as well as the state ( [ eq10 ] ) . if the state ( [ eq9 ] ) is not considered as approximately equal to the state ( [ eq10 ] ) , the pattern will be different .this explanation is a bit different from that in current quantum mechanics .the state ( [ eq9 ] ) is less mysterious than the state ( [ eq10 ] ) , we shall satisfy the decrease of the mysterious extent . in quantum optics ,the wave function ( [ eq1 ] ) is a superposition of the two terms that their energies are equal and then it maintains the strict conservation of energy .the atom will be in an approximately superposed state if is large. otherwise it may be in a mixed state , which is described by a reducible density operator by tracing out the part of photons , and not in a pure superposed state of its two energy eigenstates . 
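for concreteness , if the joint atom - field state has the jaynes - cummings form suggested by eq . ( [ eq1 ] ) above , say |\psi(t)\rangle = c_e(t)\,|e , n\rangle + c_g(t)\,|g , n+1\rangle ( the coefficients and photon numbers here are illustrative , since the displayed equation is not fully legible in this copy ) , then tracing out the field gives the reduced density operator of the atom ,
\[
\rho_{\rm atom}(t) = {\rm tr}_{\rm field}\,|\psi(t)\rangle\langle\psi(t)| = |c_e(t)|^{2}\,|e\rangle\langle e| + |c_g(t)|^{2}\,|g\rangle\langle g|\;,
\]
which is the mixed state of the atomic levels referred to in the preceding sentence , not a pure superposition of them .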
if the state ( [ eq11 ] ) is exact , a novel cat state , similar as ( [ eq11 ] ) , must be stranger than the schrdinger s cat state , similar as ( [ eq1 ] ) .the novel cat state is in a directly superposed state of dead cat and live cat states , or a cat is in wheeler home and in einstein home at same time and there is no reason , while if the schrdinger s cat is live , the reason is due to the cover of the toxicant bottle unopened .therefore the state ( [ eq10 ] ) or ( [ eq11 ] ) is only an approximately superposed state and still relate with interaction , that is , the exact superposed state ( [ eq10 ] ) or ( [ eq11 ] ) is only an assumption and will not exist in nature . if the interaction ceases , the state ( [ eq9 ] ) or ( [ eq1 ] ) will be the state ( or ) or ( or ) and the two parts will not be in an entangled state .this agrees with the locality viewpoint by einstein __ that , _ at the time of measurement the two systems no longer interact , no real change can take place in the second system in consequence of anything that may be done to the first system ._ the other two similar examples are coherent and squeezed states of radiation fields .they have been expressed respectively in different superposition of photon number states with different probability amplitudes , or each one is in a pure superposed state .these states can not maintain the strict conservation of energy in the evolution .when these states are preparing , the field and the apparatus have interaction and evolve an entangled state .after the interaction ceases , the entanglement can be maintained according to quantum mechanics , then the field can not be in a pure superposed state of photon number states , i.e. , coherent or squeezed state can not exist in nature . this prepared field may be some photon number state and can be described as a mixed state . if the interaction does not cease and the states of the apparatus are approximately considered same , then we can obtained approximate coherent or squeezed state .the vacuum fluctuation may also be explained as a result of the states of other matter being approximate same .there are a large amount of approximate solutions of the schrdinger equation of systems in quantum mechanics since exact solutions are difficult to be obtained , while the states ( [ eq10 ] ) and ( [ eq11 ] ) are viewed as exact solutions , so why their physical meaning are difficult to be understood is because of neglecting the state of other matter ( or external field approximate ) and even the interaction between the matter and the system considered .however , it was probably this neglect that brought the hidden variable presented for explaining some strange properties of a quantum system and even all other matter is considered if the decoherence of a superposed state is discussed , the two extremities make quantum mechanics more indigestible .the above cases hint us that the understanding of the principle of superposition of states in quantum mechanics may be changed as such that , _ in non - relativistic quantum mechanics , only a superposed state of inherent vector ( spin and photon polarization ) of a single particle could exists in physics , other superposed states exist only in compound systems with interaction between subsystems and are entangled states , interaction and strict conservation law are new constrain conditions . 
_a state has been believed to be mathematically expanded as a superposed state of eigenstates of a conserved physical quantity , but it has not been exactly proved .we think that the expansion may not be a single , i.e. , a state may be mathematically expanded as a superposed state of not only non - degenerate eigenstates of a conserved physical quantity , but also degenerate eigenstates , for example , two terms in eq.([eq1 ] ) , the latter has physical meaning while the former no .the degenerate states of a system may be those of energy , or momentum , or angular momentum .the schrdinger equation of a hydrogen atom system is transformed to a single particle s equation in an equivalent potential field .we can solve the equation to obtain the energy eigenstates and eigenvalues of the atom .it can be in some energy eigenstate , but it can not be in a superposed state of non - degenerate energy eigenstates according to our understanding .when it is in an energy eigenstate , the hydrogen atom system composed of an electron and a proton can be in an entangled state , therefore they may be in different states due to the interaction interchanging energy among three parts ( the kinetic energies of the electron and the proton , and interaction potential energy ) .the energy of the atom can also be divided into two parts : one is the sum of the kinetic energy of the electron and interaction potential energy , and the other is the kinetic energy of the proton .momentum or angular momentum can also be interchanged between the electron and the proton .since the operators of energy , momentum and angular momentum of a free or isolated particle commute with each other , they have common eigenfunctions . by use of our understanding, we may obtain a law corresponding to the newton s first law that _ a free particle must be in an eigenstate ( an extrapolated wave function , that is , a plane wave ) having definite energy , momentum and angular momentum in some inertial reference frame . _but we do not know what state the free particle is in after it is prepared , so we have to describe it in a mixed state of other type ( described by more than one wave function or also by a density operator ) and not in a superposed state . _the schrdinger equation may be corresponding to the newton s second law , _ for they are all dynamic equations of their respective system . in a compound system with interaction ,the states of all parts should be considered in the schrdinger equation for exactness or understanding . _the principle of superposition of states in a compound system with new understanding may be corresponding to the newton s third law_.the uncertainty relation was obtained due to the fact that the non - commuting operators , momentum and coordinate , for example , corresponding to two physical quantities have no common eigenfunction .if the system is in an eigenstate of one operator , it can not be in an eigenstate of the other .this led einstein et al .present the physical reality viewpoint above .but we can understand it as such that the momentum of a free particle keeps definite value in any coordinate , i.e. , when is an eigenvalue , the particle has no a definite coordinate ; therefore it is not strange .but self - contradiction appears below . 
according to the mainstream point of measurement pointed out by dirac ,a measurement value of a system is one of the eigenvalues , with some probability , associated with the eigenfunctions in the wave function , and the state left is the eigenstate ; if the quantity is measured immediately , the energy , for example , the same eigenvalue may be obtained , the difference value of two results , but the interval between two measurements , according to the point of landau _ et al . _ , is not infinite , then .therefore the measurement idea of quantum mechanics contradicts with the uncertainty relation of energy - time .if we associate the uncertainty relation with our new understanding of measurement above , the difference of energy or momentum is measured at least one photon s .then the uncertainty relation of energy - time can be re - explained as the following .if the energy of one photon emitting from an atom is measured , the atom has decreased the same energy .therefore , where , , and are planck s constant , frequency and _ period _ of the photon .the time needed in one measurement may be equal or great than the period , i.e. , , then we obtain , this is different from the meaning and the formula of the uncertainty relation , , in which represents the difference between two measured eigenvalues and a time interval of two measurements ; _ while the and come from only one measurement , not two . _ similarly , in direction the least change of momentum of a particle is also a photon s where is the light speed . a measurement may be completed in the extent , then , which is also different from the meaning and the formula of the uncertainty relation , .according to our understandings of wave and the principle of superposition of states , the entanglement in a compound system is produced and maintained by interaction among the parts .the principle of indistinguishability of identical particles , which is based on exchange interaction , seems to be other reason to produce and maintain entanglement without interaction potential energy in the hamiltonian of the system . but a pure entangled state and its corresponding mixed state ( for example , eqs.([eq14 ] ) and ( [ eq16 ] ) below ) are all fit for the principle , since the expressions of the two states are completely equivalent respectively by exchanging two particles , and then equivalent in physics , so we think that exchange interaction is imaginary and different from other real interactions in hamiltonian , and then not the reason of producing and maintaining entanglement .if the interaction between two identical particles , other than the exchange interaction , ceases , the superposed state of a compound system will collapsed , i.e. , entangled state will not exist .having reinvestigated bell s theorem ( inequality ) and later ones and some related experiments , we can not find out that any given pairs of particles without entanglement was used in the experiment same as the same particles with initial entanglement , then there is no comparison of measurement results of the two states , furthermore no average result of coherent probability surpasses 75% ( the ideal classical coherent probability explained below ) . 
in deduction of his inequalitystarting from hidden variable theory and local reality theory , bell had used the formula of coherence of an entangled spin state of two electrons , , which is a result of quantum mechanics and can not be obtained from the formulae and , in the case there is no interaction between the two electrons .if the formula comes from some experiment , then the non - locality of an entangled state has been proved and we do not need the inequality . in all experiments to test bell s theorem and later ones , two loopholes , that one is low detection loophole and the other is locality or lightcone loophole , which is about two parts of an entangled state being no spacelike separate associated with measurement , can not be closed at same time . other type experiment for testing the non - locality of entangled state is quantum ghost interference . in the experiment ,very few e - ray photons pass through the double slits to photon counting detector , meanwhile many o - ray photons reach the other detector , then the output pulses of the detectors , sent to a coincidence circuit with nsec coincidence time window , may not be a pair of initially entangled photons .so this experiment is still not enough to prove the non - locality of an entangled state .in the following , we compare the calculations of coherent probability of an entangled state , which is assumed to maintain the entanglement when the interaction between two parts ceases , and that of some probable disentangled mixed states .we first consider two distant identical particles in different energy eigenstates and , respectively .suppose that the interaction between them is only in a very small area , and when they approach and interact with each other , their state will evolve in an entangled state according to quantum mechanics , the state will maintain the entanglement when they apart from each other and the interaction between them ceases , and the state form can be rewritten as where and . in the experiment , the states of one particle superposed by energy eigenstates , similar as and , are considered to be produced by raman beam . but according to our understanding , the energy state of the single particle and raman beam is an entangled state and the single particle itself is not a pure superposed state , then the entangled state in eq.([eq14 ] ) can not be written as eq.([eq15 ] ) .when the interaction of the two identical particles ceases , the entangled state collapses into the state or , which can be described by density operator form a disentangled mixed state. 
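for two identical particles initially prepared in energy eigenstates \varphi_a and \varphi_b , plausible explicit forms of the states discussed above ( eqs . ( [ eq14])([eq17 ] ) are not legible in this copy , so the following is only a reconstruction under that assumption ) are the entangled state
\[
|\psi\rangle = \tfrac{1}{\sqrt{2}}\left( |\varphi_a\rangle_1 |\varphi_b\rangle_2 \pm |\varphi_b\rangle_1 |\varphi_a\rangle_2 \right)
\]
and , after the interaction has ceased , the disentangled mixed state
\[
\rho = \tfrac{1}{2}\,|\varphi_a\rangle_1\langle\varphi_a| \otimes |\varphi_b\rangle_2\langle\varphi_b| + \tfrac{1}{2}\,|\varphi_b\rangle_1\langle\varphi_b| \otimes |\varphi_a\rangle_2\langle\varphi_a|\;.
\]
both assignments give the same single - particle energy statistics , which is the point taken up next .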
we can not distinguish the mixed state and the entangled state by measuring difference of energy of two eigenstates of a single particle or even energy eigenvalues with the measurement concept of quantum mechanics , that is , if we measure their energies , we may all obtain 100% coherent probability , therefore the non - locality of entangled state of this type can not be proved experimentally .cohen discovered that a mixed state of two same subsystems could be written as or due to in mathematically .cohen thought that there exists hidden entanglement .but we think that the mixed state will be expressed by eq.([eq17 ] ) ( no entanglement ) if the interaction between the two particles ceases after the system state is prepared .if some interaction between them exists and and represent the degenerate energy eigenstates of a subsystem , the mixed state may be expressed by eq.([eq18 ] ) , which exists entanglement .so this is also a defect of density operator that it expresses two different mixed states , entangled and disentangled ones .next , we consider the polarized ( inherent vector ) state of two photons .if a pair of photons , with the horizontal state and vertical state respectively , enter into a beam splitter and interact , the two photons may be in the singlet state where and represent and polarized photons with the angle between and .after they come out of the beam splitter , the interaction disappears and the entangled state collapses in a mixed state according to our understanding , but their state maintains entanglement in eq.([eq19 ] ) according to current quantum mechanics .we do not know the scheme of collapse of wave function , we guess that the first probable mixed state may be in an ensemble of or in different angle with identical probability density , and the second may only be in one of the states or with same probability .if we select two measurement bases and , all results will be 100% coherent probability for entangled state , that the polarizations of the two photons must be perpendicular .but , if the measured state is a disentangled mixed state , our calculation in an average is 75% , for the first mixed state by the same measurement way as above , that is , if one photon is measured in state , the other photon is measured in state with 75% , and 100%% with for the second case , and the average coherent probability is also 75% .so we can distinguish experimentally which state , entangled or disentangled , the measured coherent probability of initially entangled state belongs to if the probe efficiency is high enough .we may prepare some pairs of particles without entanglement and do same experiments as measuring the initially entangled states of same particles , then we can compare the measurement results to discover whether the initially entangled state has been disentangled . 
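the 75% figure for the first disentangled ensemble can be checked directly . if that ensemble consists of pairs |\theta\rangle_1 |\theta+\pi/2\rangle_2 with \theta uniformly distributed , then the probability of finding photon 2 along \alpha+\pi/2 , given that photon 1 was found along \alpha , equals e[\cos^4\theta]/e[\cos^2\theta] = 3/4 for every analyser angle \alpha , compared with 1 for the entangled singlet . a short monte - carlo check ( written in python purely as an illustration ; it is not part of the original text ) :

import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
theta = rng.uniform(0.0, np.pi, n)   # polarisation of photon 1 in each pair; photon 2 is at theta + pi/2
alpha = 0.3                          # analyser angle; the conditional probability below does not depend on it

p1 = np.cos(theta - alpha) ** 2      # malus' law: photon 1 passes the alpha analyser
p2 = np.cos(theta - alpha) ** 2      # photon 2 passes the alpha + pi/2 analyser
pass1 = rng.random(n) < p1
pass2 = rng.random(n) < p2

print(pass2[pass1].mean())           # approximately 0.75, versus 1.0 for the entangled state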
if the ratio of results of coherent probability is approximately 3:4 , the latter states maintain their entanglement ; if the ratio of that is near equal , then the latter states have been disentangled .this way may be used to test the non - locality of an entangled state in the case of low probe efficiency .in this paper , i consider the interaction in a quantum system more completely than before , and produce some new understandings and conclusions of quantum mechanics .these may make quantum mechanics be a bit more easily understood intuitively and some strange properties will not appear , for example , a superposed state of a free particle , except inherent vector , and the non - locality of an entangled state will not appear .the new understandings and conclusions are : in non - relativistic quantum mechanics , only a superposed state of inherent vector ( spin and photon polarization ) of a single particle could exists in physics , other superposed states exist only in compound systems with interaction between subsystems and are entangled states , interaction and conservation law are new constrain conditions .therefore the complete consideration of interaction may make the understandings of quantum mechanics a bit more profound than before , and also produce a viewpoint same as the locality .we may not need for the moment some hidden variable theory and a complete theory of quantum mechanics that einstein believed to be produced in future .i acknowledge my colleagues : jian zou , feng wang , bin shao , xiu - san xing and jun - gang li for discussions or comments , thank my former classmates : li - fan ying and gui - qin li for useful helps , especially thank my former teachers : pei - zhu ding and shou - fu pan for their encouragements .w. tittel , j. brendel , b. gisin , t. herzog , h. zbinden , and n. gisin , phys .a * 57 * , 3229 ( 1998 ) . v. neumann ,_ mathematical foundations of quantum mechanics _( princeton university press , princeton , 1955 ) , pp .p. a. m. dirac , _ the principle of quantum mechanics _( science press , beijing , 2008 ) , pp .l. d. landau and e. m. lifshitz , _ quantum mechanics _( pergamon press ltd . ,oxford , 1977 ) , pp .28 , 23 - 24 .l. de broglie , nature , * 112 * , 540 ( 1923 ) .r. p. feynman , _ the character of physical law _( the m. i. t. press , massachusetts , 1965 ) , pp .a. einstein , b. podolsky , and n. rosen , phys . rev . * 47 * , 777 ( 1935 ) .n. bohr , phys . rev . * 48 * , 696 ( 1935 ) .j. s. bell , physics * 1 * , 195 ( 1964 ) .d. bohm , _ quantum theory _( prentice - hall inc ., new york , 1951 ) , chap . 2 , 5 . j. f. clauser , m. a. horne , a. shimony , and r. a. holt , phys .* 23 * , 880 ( 1969 ) .l. hardy , phys . rev. lett . * 71 * , 1665 ( 1993 ) .d. m. greenberger , m. horne , and a. zeilinger , phys .a * 78 * , 022110 ( 2008 ) .a. aspect , p. grangier , and g. roger , phys .* 47 * , 460 ( 1981 ) .d. v. strekalov , a. v. sergienko , d. n. klyshko , and y. h. shih , phys .lett . * 74 * , 3600 ( 1995 ) .d. bouwmeester , j. w. pan , m. daniell , h. weinfurter , and a. zeilinger , phys .. lett . * 82 * , 1345 ( 1999 ) .m. a. rowe , d. kielpinski , v. meyer , c. a. sackett , w. m. itano , c. monroe , and d. j. wineland , nature * 409 * , 791 ( 2001 ) .h. compton , phys .* 22 * , 409 ( 1923 ) ; y. h. woo , phys . rev . * 27 * , 119 ( 1926 ) .w. greiner and j. reinhardt , _ quantum electrodynamics _( springer - verlag berlin heidelberg , 1994 ) , chap . 3 .d. f. walls and g. j. 
milburn , _ quantum optics _( springer - verlag berlin heidelberg , 1994 ) , pp . 12 - 18 , 204 .r. p. feynman , _ the feynman lectures on physics _ , vol.i ( pearson education asia limited and beijing world publishing corporation , 2004 ) , chap .m. born and e. wolf , _ principles of optics : electromagnetic theory of propagation , interference and diffraction of light _ ( publishing house of electronics industry , beijing , 2005 , chinese ) , preface . c. davisson and l. h. germer , nature , * 119 * , 558 ( 1927 ) .y. d. zhang , _ quantum mechanics _ ( science press , beijing , 2008 , chinese ) , pp . 4 , 31 - 32 .a. d. aczel , _ entanglement : the greatest mystery in physics _ ( shanghai scientific & technological literature publish house , shanghai , 2008 , chinese ) , pp .o. cohen , phys .lett . * 80 * , 2493 ( 1998 ) .
|
the interaction between two parts of a compound quantum system is reconsidered more completely than before , and several understandings and conclusions that differ from current quantum mechanics are obtained : a strict conservation law holding throughout the evolution of an isolated quantum system , new understandings of wave - particle duality , measurement , and the principle of superposition of states , three laws corresponding to newton s laws , a new understanding of the uncertainty relation , support for the locality view of einstein _ et al . _ together with arguments against the non - locality of any entangled state , and a simple coherence criterion that experimenters can use to examine the correctness of the claimed non - locality . these results may make quantum mechanics a bit easier to understand intuitively .
|
consider the nonparametric errors - in - variables ( eiv ) regression model classical measurement error = 0 , \\w = x+\varepsilon , \end{cases } \label{eq : eiv}\ ] ] where each of , and is a univariate random variable , and is independent from .we observe , but observe neither nor .furthermore , we assume that the distribution of is unknown .the variable is a latent predictor variable , while is a measurement error . of interestare estimation of and inference on the regression function ] .let denote the equality in distribution .in this section , we informally present our methodology to construct confidence bands for .the formal analysis of our confidence bands will be carried out in the next section .we will also discuss some examples of situations where an auxiliary sample from the measurement error distribution is available .we first introduce a deconvolution kernel method to estimate and under the assumption that the distribution of is known .let be an independent sample from the distribution of .in this paper , we assume that the densities of and exist and are denoted by and , respectively . let , and denote the characteristic functions of , and , respectively . by the independence between and , the density of exists andis given by the convolution of the densities of and , namely , where denotes the convolution .this in turn implies that the characteristic function of is identical to the product of those of and , namely , provided that is non - vanishing on and is integrable on with respect to the lebesgue measure ( we hereafter omit `` with respect to the lebesgue measure '' ) , the fourier inversion formula yields that the expression ( [ eq : deconvolution ] ) leads to a method to estimate .however , simply replacing by the empirical characteristic function of , namely , does not work .specifically , the function is not integrable on because as by the riemann - lebesgue lemma while is the characteristic function of the discrete distribution ( i.e. , the empirical distribution ) and ( cf .* proposition 27.28 ) . a standard approach to dealing with this problem is to use a kernel function to restrict the integral region in ( [ eq : deconvolution ] ) to a compact interval .let be a kernel function such that is integrable on , , and its fourier transform is supported in ] , and ] . hence /\varphi_{\varepsilon}(t) ] ; these are fredholm integral equations of the first kind where the right hand side functions are directly estimable. rates of convergence and pointwise asymptotic normality of are studied by , among others .the discussion so far has presumed that the distribution of is known . however , in many applications , the distribution of is unknown , and hence the estimators and are infeasible . in the present paper ,we assume that there is an independent sample from the distribution of : where as .we do not assume that are independent from . 
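returning to the estimator \hat m = \hat g / \hat f_x constructed above from the primary sample ( y_i , w_i ) and the auxiliary error sample , a minimal numerical sketch is given below . the kernel transform phi_K , the quadrature grid and all function names are illustrative choices , not the paper's ; the comparison with series - based npiv methods resumes after the snippet .

import numpy as np

def ecf(t, sample):
    # empirical characteristic function (1/m) * sum_j exp(i t eps_j), for a 1-d array t
    return np.exp(1j * t[:, None] * np.asarray(sample, dtype=float)[None, :]).mean(axis=1)

def phi_K(t):
    # fourier transform of a kernel supported on [-1, 1] and equal to 1 near the origin;
    # this smooth flat-top form is only one illustration of the class described in the text
    t = np.abs(np.asarray(t, dtype=float))
    out = np.zeros_like(t)
    out[t <= 0.5] = 1.0
    edge = (t > 0.5) & (t < 1.0)
    s = (t[edge] - 0.5) / 0.5
    out[edge] = np.exp(-np.exp(-1.0 / s ** 2) / (1.0 - s) ** 2)
    return out

def eiv_regression(xgrid, Y, W, eps_sample, h, ngrid=801):
    # plug-in deconvolution kernel regression: hat m(x) = hat g(x) / hat f_X(x), with
    # K_n(u) = (1/2pi) * integral_{-1}^{1} exp(-i t u) phi_K(t) / ecf_eps(t/h) dt
    xgrid, Y, W = (np.asarray(a, dtype=float) for a in (xgrid, Y, W))
    t = np.linspace(-1.0, 1.0, ngrid)
    w = phi_K(t) / ecf(t / h, eps_sample) * (t[1] - t[0])   # quadrature weights on the t-grid
    A = np.exp(-1j * np.outer(xgrid / h, t))                # exp(-i t x / h)
    B = np.exp(1j * np.outer(t, W / h))                     # exp(+i t W_i / h)
    K = ((A * w) @ B).real / (2.0 * np.pi)                  # K[x, i] = K_n((x - W_i) / h)
    f_X = K.mean(axis=1) / h                                # deconvolution density estimate hat f_X(x)
    g = (Y[None, :] * K).mean(axis=1) / h                   # hat g(x)
    return g / f_X, f_X, K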
in section [ section examples] , we will discuss examples where such observations from the measurement error distribution are available .given , we may estimate by the empirical characteristic function , namely , and estimate the deconvolution kernel by the plug - in method : note that under the regularity conditions stated below , with probability approaching one , so that is well - defined with probability approaching one .note also that is real - valued .now , we estimate by , where density estimators of the form are studied in , , and , among others , and nonparametric regression estimators of the form are studied in , among others .we now describe our method to construct confidence bands for based on the estimator . under the regularity conditions stated below ,we will show that can be approximated by \ ] ] uniformly in , where is a compact interval in on which is bounded away from zero , and ] is negligible relative to so that we have ignored )^{2} ] of does not vanish on the entire real line .both and we ( and in fact most of papers on deconvolution and eiv regression ) assume that the characteristic functions of the error variables do not vanish on , but our approach does allow to take zeros .the assumption that does not vanish on is not innocuous ; it is non - trivial to find densities that are compactly supported and have non - vanishing characteristic functions ( though these properties are not mutually exclusive ; see , e.g. , , footnote 4 ) , and the assumption excludes densities convolved with distributions whose characteristic functions take zeros , and so on . uniform densities on ] .for npiv models , and the more recent paper by develop methods to construct confidence bands for the structural function using series methods , although these papers do not formally consider cases where samples on and are different .however , we would like to point out that there are difference in underlying assumptions between series estimation of npiv models and deconvolution kernel estimation in eiv regression .for example , in series estimation of npiv models , it is often assumed that the distribution of is compactly supported and the density of is bounded away from zero on its support ( cf . * ? ? ? * ; * ? ? ? * ) . on the other hand , in eiv regression, it is commonly assumed that the characteristic function of the measurement error is non - vanishing on ( which leads to identification of the function via ( [ eq : identification ] ) ) , and in many cases the measurement error then has unbounded support , which in turn implies that has unbounded support .further , while both npiv and eiv regressions are statistical ill - posed inverse problems , the ways in which the `` ill - posedness '' is defined are different ; in series estimation of npiv models , the ill - posedness is defined for given basis functions , while in eiv regression , the ill - posedness is defined via how fast the characteristic function of the measurement error distribution decays . hence we believe that our inference results cover different situations than those developed in the npiv literature .in this section , we study asymptotic validity of the proposed confidence band ( [ eq : proposed band ] ) . to this end , we make the following assumption . for any given constants , let denote a class of functions defined by where is the integer such that , and denotes the -derivative of ( ) .let be a compact interval in .[ as : mean ] we assume the following conditions . 1. 
< \infty ] is bounded and continuous , and for each , the function f_{w}(w) ] and ] , and its characteristic function , , t \in { \mathbb{r}} ] .furthermore , is -times continuously differentiable with for .6 . for all , and f_{w}(x ) condition ( i ) is a moment condition on , which we believe is not restrictive .note that , for each , if = { \mathrm{e } } [ |y|^{2+\ell } \mid x] ] , and the right hand side is bounded and continuous if is bounded ( which allows to be unbounded globally ) . for condition ( ii ) , we first note that is the fourier transform of ( which is integrable by < \infty ] ( see the proof of lemma [ lem : moment bound]-(ii ) ) .note that since is bounded , we have that it is worth mentioning that under these conditions , we have that uniformly in ( see lemma [ lem : moment bound ] ) , and the right hand side is larger by factor than the corresponding term in the error - free case ( recall that in standard kernel regression without measurement errors , the variance of is ) .this results in slower rates of convergence of kernel regression estimators in presence of measurement errors than those in the error - free case , and the value of is a key parameter that controls the difficulty of estimating , namely , the larger the value of is , the more difficult estimation of will be . in other words , the value of quantifies the degree of ill - posedness of estimation of . condition ( vii ) restricts the bandwidth and the sample size from the measurement error distribution .the second condition in ( [ eq : bandwidth ] ) allows to be of smaller order than , which in particular covers the panel data setup discussed in example [ ex : panel ] .the last condition in ( [ eq : bandwidth ] ) means that we are choosing undersmoothing bandwidths , that is , choosing bandwidths that are of smaller order than optimal rates for estimation of .inspection of the proof of theorem [ thm : gaussian approximation ] shows that without the last condition in ( [ eq : bandwidth ] ) , we have that where the term comes from the deterministic bias .so , choosing optimizes the rate on the right hand side , and the resulting rate of convergence of is .the last condition in ( [ eq : bandwidth ] ) requires to choose of smaller order than ( by factors ) , so that the `` variance '' term dominates the bias term .we will later discuss the problem of bias after presenting the theorems ( see remark [ rem : bias ] ) .for condition ( vii ) to be non - void , we require .we first state a theorem that establishes that , under assumption [ as : mean ] , the distribution of , where is defined in ( [ eq : deviation process ] ) , can be approximated by that of the supremum of a certain gaussian process , which is a building block for proving validity of the proposed confidence band .recall that a gaussian process indexed by is a tight random variable in if and only if is totally bounded for the intrinsic pseudo - metric } ] .we use the kernel function defined by its fourier transform given by where and ( cf . * ? ? ?* ; * ? ? ?the function is infinitely differentiable with support 1,429 higher than those of normal weight .while there is an extensive body of literature on cost estimation of obesity , it is a limitation that commonly used data sets contain only self - reported body measures , and hence the values of bmi generated from them are prone to biases .more recently , use the instrumental variable approach to address this issue in cost estimation of obesity . 
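before turning to the application , the band construction discussed above can be sketched numerically . the studentization and critical value below follow the generic gaussian - multiplier recipe applied to the linearization displayed earlier ; the paper's exact normalizations may differ , and the helper eiv_regression is the hypothetical one from the previous sketch .

import numpy as np

def multiplier_bootstrap_band(xgrid, Y, W, eps_sample, h, tau=0.05, n_boot=1000, seed=0):
    # sketch of a (1 - tau) uniform confidence band for m over xgrid
    m_hat, f_X, K = eiv_regression(xgrid, Y, W, eps_sample, h)
    Y = np.asarray(Y, dtype=float)
    n = len(Y)
    # linearized terms psi_i(x) ~ (Y_i - m(x)) * K_n((x - W_i)/h) / (h * f_X(x))
    psi = (Y[None, :] - m_hat[:, None]) * K / (h * f_X[:, None])
    sigma = np.sqrt((psi ** 2).mean(axis=1))       # pointwise scale proxy
    rng = np.random.default_rng(seed)
    sups = np.empty(n_boot)
    for b in range(n_boot):
        xi = rng.standard_normal(n)                # gaussian multipliers
        sups[b] = np.abs((psi * xi).sum(axis=1) / (np.sqrt(n) * sigma)).max()
    half_width = np.quantile(sups, 1.0 - tau) * sigma / np.sqrt(n)
    return m_hat - half_width, m_hat + half_width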
in this section ,we employ our data combination approach to treat the self - reporting errors , and draw confidence bands for nonparametric regressions of medical costs on bmi .we focus on costs measured by medical expenditures .with this said , we note that there are also indirect costs of obesity which we do not account for , e.g. , the costs of obesity are known to be passed on to obese workers with employer - sponsored health insurance in the form of lower cash wages and labor market discrimination against obese job seekers by insurance - providing employers see also .details of the two data sets which we combine are as follows .the national health and nutrition examination survey ( nhanes ) of cdc contains data of survey responses , medical examination results , and laboratory test results .the survey responses include demographic characteristics , such as gender and age .in addition to the demographic characteristics , the survey responses also contain self - reported body measures and self - reported health conditions . among the self reported body measures are height in inches and weight in pounds .these two variables allow us to construct the bmi in lbs / in as a generated variable .we convert this unit into the metric unit ( kg / m ) .the nhanes also contains medical examination results , including clinically measured bmi in kg / m .we treat the bmi constructed from the self - reported body measures as , and the clinically measured bmi as . from the nhanes as a validation data set of size , we can compute for each .the panel survey of income dynamics ( psid ) is a longitudinal panel survey of american families conducted by the survey research center at the university of michigan .this data set contains a long list of variables including demographic characteristics , socio - economic attributes , expenses , and health conditions , among others .in particular , the psid contains self - reported body measures of the household head , including height in inches and weight in pounds .these two variables allow us to construct the body mass index ( bmi ) in lbs / in as a generated variable .again , we convert this unit into the metric unit ( kg / m ) .the psid also contains medical and prescription expenses .we treat the bmi constructed from the self - reported body measures as , and the medical and prescription expenses as .we note that the information contained in the psid are mostly at the household level , as opposed to the individual level , and thus indicates the total medical and prescription expenses of household . to focus on the individual medical and prescription expenses rather than household expenses , we only consider the sub - sample of the households of single men with no dependent family , for which the total medical and prescription expenses of the household equal to the individual medical and prescription expenses of the household head . hence , the reported regression results concern these selected subpopulations . 
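schematically , the data combination just described proceeds as follows , using the hypothetical helpers sketched earlier . the file and column names are purely illustrative ( the actual nhanes and psid variable names differ ) , the interval endpoints are a guess around the midpoint of 25 stated in the next paragraph , and the bandwidth is a placeholder for the data - driven rule used in the text .

import numpy as np
import pandas as pd

nhanes = pd.read_csv("nhanes_2009_2010.csv")   # validation data: self-reported and measured bmi
psid = pd.read_csv("psid_2009.csv")            # primary data: expenses and self-reported bmi

# auxiliary sample of self-reporting errors: eps_j = W_j - X_j
eps_sample = (nhanes["bmi_self_report"] - nhanes["bmi_measured"]).dropna().to_numpy()

# one age group of single men without dependents, e.g. group (c)
sub = psid.query("50 <= age <= 64 and male == 1 and no_dependents == 1")
sub = sub.dropna(subset=["medical_expenses", "bmi_self_report"])
Y = sub["medical_expenses"].to_numpy()
W = sub["bmi_self_report"].to_numpy()

xgrid = np.linspace(22.0, 28.0, 121)           # interval centred at bmi = 25 (endpoints illustrative)
h = 0.9                                        # placeholder bandwidth
lower, upper = multiplier_bootstrap_band(xgrid, Y, W, eps_sample, h, tau=0.10)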
combining the nhanes of size and the psid of size , we obtain the generated data to which we can apply our method in order to draw confidence bands for the regression function of the model with = 0 ] as the interval on which we draw confidence bands .this interval has 25 ( the who cut - off point for overweight ) as the midpoint , and is contained in the convex hull of the empirical support of .the kernel function and the bandwidth rule carry over form our simulation studies .the sequence used for bandwidth choice is defined by following the recommendation which we made from our simulation results . to account for the different medical conditions across ages , we categorize the sample into the following subsamples : ( a ) male individuals aged 2034 , ( b ) male individuals aged 3549 , ( c ) male individuals aged 5064 , and ( d ) male individuals aged 65 or above .note that this stratification takes into account the fact that 64 and 65 make the cutoff of medicare eligibility , and hence that group ( d ) faces different expenditure schedules and different economic incentives of health care utilization from groups ( a)(c ) see .after deleting observations with missing fields from the nhanes 2009 - 2010 , we obtain the following sample sizes of these four subsamples : ( a ) , ( b ) , ( c ) , and ( d ) . after deleting observations with missing fields from the psid 2009 for total medical expenses as the dependent variable , we obtain the following sample sizes of these four subsamples : ( a ) , ( b ) , ( c ) , and ( d ) . similarly , after deleting observations with missing fields from the psid 2009 for prescription expenses as the dependent variable , we obtain the following sample sizes of these four subsamples : ( a ) , ( b ) , ( c ) , and ( d ) . note that we use similar survey periods around 2009 for both the nhanes and psid to remove potential time effects .figure [ fig : application_medical ] displays estimates and confidence bands for total medical expenses in 2009 us dollars as the dependent variable .figure [ fig : application_prescription ] similarly displays estimates and confidence bands for prescription expenses in 2009 us dollars as the dependent variable . in both figures , the estimates are indicated by solid black curves .the areas shaded by gray - scaled colors indicate 80% , 90% , and 95% confidence bands .the four parts of the figure represent ( a ) men aged from 20 to 34 , ( b ) men aged from 35 to 49 , ( c ) men aged from 50 to 64 , and ( d ) men aged 65 or above .we see that the levels of both total medical expenses and prescription expenses tend to increase in age , as expected . for the groups ( a)(b ) of young men , both total medical expenses and and prescription expenses exhibit little partial correlation with bmi . for the group ( c ) of middle aged men , on the other hand , the relations turn into positive ones . for the group ( d ) of senior men , total medical expenses and bmi continue to have a positive relationship , but prescription expenses exhibit little partial correlation with bmi .if we look at the 90% confidence band for the group ( c ) of men aged from 50 to 64 , annual average total medical expenses are approximately 17,015 if bmi , approximately 18,119 if bmi , and approximately 21,934 if bmi . 
likewise , annual average prescription expenses are approximately 636 if bmi , approximately 761 if bmi , and approximately 951 if bmi .these concrete numbers illustrate that confidence bands are useful to make interval predictions of incurred average costs , and this convenient feature has practical values added to the existing methods which only allow for reporting estimates with unknown extents of uncertainties .the results of the present paper can be used for specification testing of the regression function .specification testing in eiv models is important since nonparametric estimation of a regression function has slow rates of convergence , even slower than standard error - free nonparametric regression , while correct specification of a parametric model enables us to estimate the regression function with faster rates , often of oder .suppose that we want to test whether the regression function belongs to a parametric class where is a subset of a metric space ( in most cases a euclidean space ) .popular specifications of include linear and polynomial functions . in caseswhere is linear or polynomial , it is possible to estimate the coefficients with -rate under suitable regularity conditions .suppose now that for some and can be estimated by with a sufficiently fast rate , i.e. , , and that assumption [ as : mean ] is satisfied with .then it is not difficult to see from the proof of theorem [ thm : validity of mb ] that uniformly in , so that therefore , the test that rejects the hypothesis that for some if for some is asymptotically of level .we summarize the above discussion as a corollary .suppose that for some where is a subset of a metric space , and that assumption [ as : mean ] is satisfied with .let be any estimator of such that ; then .the literature on specification testing for eiv regression is large .see , , , , , and references therein .however , none of those papers considers -based specification tests . in practical applications, we may have additional regressors , possibly vector valued , without measurement errors .suppose that we are interested in estimation and making inference on ] , and is independent from conditionally on . in principle, the analysis can be reduced to the case where there are no additional regressors by conditioning on .if is discretely distributed with finitely many mass points , then , where is a mass point , can be estimated by using only observations for which .if is continuously distributed , then can be estimated by using observations for which is `` close '' to , which can be implemented by using kernel weights .however , the detailed analysis of this case is not presented here for brevity .the techniques used to derive confidence bands for the conditional mean in eiv regression can be extended to the conditional distribution function .suppose now that we are interested in constructing confidence bands for the conditional distribution function on a compact rectangle where and are compact intervals , and where we do not observe but instead observe with ( measurement error ) being independent of .as before , we assume that in addition to an independent sample on , there is an independent sample from the measurement error distribution .since ] , is integrable on . furthermore , | dt < \infty ] . ( vii ) condition ( vii ) in assumption [ as : mean ] .[ thm : cdf ] under assumption [ as : cdf ] , as , .furthermore , the supremum width of the band is . 
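two of the extensions discussed above are short enough to sketch in code. first, the sup-norm specification test from the corollary reduces to checking whether the fitted parametric curve stays inside the uniform band on the evaluation grid; here s_hat stands for whatever scaling function and c_crit for whatever multiplier-bootstrap critical value were used to build the band, so the names are placeholders for the paper's quantities.

```python
import numpy as np

def supnorm_specification_test(g_hat, g_param, s_hat, c_crit):
    """reject the parametric null when sup_x |g_hat(x) - g_param(x)| / s_hat(x)
    exceeds the critical value used for the uniform confidence band."""
    t_stat = float(np.max(np.abs(g_hat - g_param) / s_hat))
    return t_stat, t_stat > c_crit
```

second, at the level of the estimator, the conditional-distribution-function extension amounts to replacing the response by the indicator 1{Y <= y} and reusing the regression machinery sketched earlier (deconv_regression above); the band would then be computed jointly over the (x, y) rectangle.

```python
def deconv_conditional_cdf(x_grid, y, Y, W, eps_sample, h):
    """deconvolution estimate of F(y | x) on x_grid for a fixed y."""
    return deconv_regression(x_grid, (np.asarray(Y) <= y).astype(float), W, eps_sample, h)
```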
to the best of our knowledge ,theorem [ thm : cdf ] is also a new result .in this paper , we develop a method to construct uniform confidence bands for nonparametric eiv regression function .we consider the practically relevant case where the distribution of the measurement error is unknown .we assume that there is an independent sample from the measurement error distribution , where the sample from the measurement error distribution need not be independent from the sample on response and predictor variables .such a sample from the measurement error distribution is available if there is , for example , either 1 ) validation data or 2 ) repeated measurements ( panel data ) on the latent predictor variable with measurement errors , one of which is symmetrically distributed .we establish asymptotic validity of the proposed confidence band for ordinary smooth measurement error densities , showing that the proposed confidence band contains the true regression function with probability approaching the nominal coverage probability . to the best of our knowledge ,this is the first paper to derive asymptotically valid uniform confidence bands for nonparametric eiv regression .we also propose a practical method to choose an undersmoothing bandwidth for valid inference .simulation studies verify the finite sample performance of the proposed confidence band . finally , we discuss extensions of our results to specification testing , cases with additional regressors without measurement errors , and confidence bands for conditional distribution functions .in this section , we collect technical tools that will be used in the proofs of theorems [ thm : gaussian approximation ] and [ thm : validity of mb ] .the proofs rely on modern empirical process theory . for a probability measure on a measurable space and a class of measurable functions on such that , let denote the -covering number for with respect to the -seminorm .the class is said to be pointwise measurable if there exists a countable subclass such that for every there exists a sequence with pointwise .a function is said to be an envelope for if for all .see section 2.1 in for details .[ lem : maximal inequality ] let be i.i.d .random variables taking values in a measurable space , and let be a pointwise measurable class of ( measurable ) real - valued functions on with measurable envelope .suppose that there exist constants and such that where is taken over all finitely discrete distributions on . furthermore , suppose that < \infty ] . define } ] for all for some . then \leq n^{1/r } \max_{1 \leq j \leq n } ( { \mathrm{e}}[|\zeta_{j}|^{r } ] ) ^{1/r}.\ ] ] this inequality is well known , and follows from jensen s inequality . indeed , \leq ( { \mathrm{e}}[\max_{1 \leq j \leq n}|\zeta_{j}|^{r}])^{1/r } \leq ( \sum_{j=1}^{n } { \mathrm{e}}[|\zeta_{j}|^{r}])^{1/r } \leq n^{1/r } \max_{1 \leq j \leq n } ( { \mathrm{e}}[|\zeta_{j}|^{r } ] ) ^{1/r} ] for all .then for any , ).\ ] ] see corollary 2.1 in ; see also theorem 3 in . in what follows , we always assume assumption [ as : mean ] . before proving theorem [ thm : gaussian approximation ] , we first prove some preliminary lemmas .recall that ] .( i ) . 
since = { \mathrm{e } } [ \ { g(x ) + u \ } e^{it(x+\varepsilon ) } ]= \psi_{x}(t ) \varphi_{\varepsilon}(t) ] for all , so that by the taylor expansion , for any , for some ]uniformly in , it suffices to show that \gtrsim ( 1-o(1 ) ) h_{n}^{-2\alpha+1}.\ ] ] observe that f_{w}(w ) = \left ( ( gf_{x})*f_{\varepsilon } \right ) ( w) ] , and are bounded and continuous on , and is bounded and continuous on , we have that the function is bounded and continuous on . in particular, since for all under our assumption , we have that . now , observe that & = \int_{{\mathbb{r } } } v(x , w ) k_{n}^{2}((x - w)/h_{n } ) dw \\ & = h_{n } \int_{{\mathbb{r } } } v(x , x - h_{n}w ) k_{n}^{2}(w ) dw .\end{aligned}\ ] ] furthermore , we have that by plancherel s theorem .hence , it suffices to show that from the proof of lemma 3 in , we have that . by the definition of , for any , there exists sufficiently small such that for all whenever .therefore , ( iii ) .pick any . since , we have that & = h_{n } \int_{{\mathbb{r } } } { \mathrm{e } } [ |y|^{2+\ell } \mid w = x - h_{n}w ] |k_{n } ( w)|^{2+\ell } f_{w}(x - h_{n}w)dw \\& \lesssim h_{n}^{-\ell \alpha+1 } \int_{{\mathbb{r } } } v_{\ell}(x - h_{n}w)k_{n}^{2}(w ) dw \\ & \leq h_{n}^{-\ell \alpha + 1 } \| v_{\ell } \|_{{\mathbb{r } } } \int_{{\mathbb{r } } } k_{n}^{2}(w ) dw \lesssim h_{n}^{-(2+\ell)\alpha+1},\end{aligned}\ ] ] where {w}(w) ] .see lemma 4 in ; see also theorem 4.1 in .consider the following classes of functions in view of the fact that and , choose constants ( independent of ) such that and .let note that is an envelope function for for each .[ lem : entropy ] there exist constants independent of such that for all , where is taken over all finitely discrete distributions on .consider the following classes of functions lemma 1 in and corollary a.1 in yield that there exist constants independent of such that and for all .in what follows , we only prove ( [ eq : entropy bound ] ) for ; the proofs for the other cases are completely analogous given the above bounds on the covering numbers for and .let , and observe that , since , there exist constants independent of such that for all , where is an envelope function for .this can be verified by a direct calculation , or observing that is a vc subgraph class with vc index at most ( cf .* lemma 2.6.15 ) , and applying theorem 2.6.7 in .let , and note that . from corollary a.1 in , there exist constants independent of such that for all .now , the desired result follows from the observation that for all .we have \|_{{\mathbb{r } } } = o_{{\mathrm{p } } } \ { h_{n}^{-\alpha } ( nh_{n})^{-1/2 } \sqrt{\log ( 1/h_{n } ) } \} ] .furthermore , \|_{{\mathbb{r } } } = o_{{\mathrm{p } } } \ { h_{n}^{-\alpha } ( nh_{n})^{-1/2 } \sqrt{\log ( 1/h_{n } ) } \}.\ ] ] the first two results are implicit in the proofs of corollaries 1 and 2 in .to prove the last result , we shall apply lemma [ lem : maximal inequality ] to the class of functions . from lemma [lem : moment bound]-(iii ) , we have that = o(h_{n}^{-2\alpha+1}) ] , so that we have \|_{{\mathbb{r } } } ] & \lesssim h_{n}^{-\alpha } \sqrt{nh_{n}\log ( 1/h_{n } ) } + h_{n}^{-\alpha}n^{1/4 } \log ( 1/h_{n } ) \\ & \lesssim h_{n}^{-\alpha } \sqrt{nh_{n}\log ( 1/h_{n})},\end{aligned}\ ] ] where the second inequality follows from the first condition in ( [ eq : bandwidth ] ) .this completes the proof .we are now in position to prove theorem [ thm : gaussian approximation ] .we divide the proof into two steps . *step 1*. 
let .we first prove that + o_{{\mathrm{p}}}(r_{n})\ ] ] uniformly in . to this end, we shall show that .first , observe from lemma [ lem : empirical chf ] that let = { \mathrm{e } } [ \ { g(x ) + u \ } e^{it(x+\varepsilon ) } ] = \psi_{x}(t ) \varphi_{\varepsilon}(t) ] , so that } \left | \frac{{\widehat}{\psi}_{yw}(t)}{\psi_{yw}(t ) } - 1 \right |^{2 } |\psi_{x}(t)|^{2 } dt \right ] \lesssim n^{-1 } \int_{-h_{n}^{-1}}^{h_{n}^{-1 } } \frac{1}{| \varphi_{\varepsilon}(t)|^{2 } } dt \lesssim h_{n}^{-2\alpha } ( nh_{n})^{-1}.\ ] ] finally , for any with , we have , so that } | { \widehat}{\psi}_{yw}(t ) |^{2 } dt \right ] \lesssim ( nh_{n})^{-1}.\ ] ] therefore , we have . from step 2 in the proof of theorem 1 of , it follows that , which in particular implies that so that .furthermore , \|_{i } + \| { \widehat}{\mu}^ { * } ( \cdot)- { \mathrm{e } } [ { \widehat}{\mu}^{*}(\cdot ) ] \|_{i } \lesssim \int_{{\mathbb{r } } } |\psi_{x}(t)| dt + o_{{\mathrm{p}}}(1 ) = o_{{\mathrm{p}}}(1) ] for from lemma [ lem : moment bound]-(iii ) , so that \lesssim h_{n}^{-\ell/2} ] .therefore , applying theorem 2.1 in to with and , yields that there exists a random variable having the same distribution as such that now , for , define and observe that is a tight gaussian random variable in with mean zero and the same covariance function as such that has the same distribution as .since , there exists a sequence such that ( which follows from the fact that the ky fan metric metrizes convergence in probability ; see theorem 9.2.2 in ) .observe that for any , the anti - concentration inequality for the supremum of a gaussian process ( lemma [ lem : anti - concentration ] ) then yields that \}.\ ] ] from the covering number bound for given in lemma [ lem : entropy ] , together with the facts that \lesssim h_{n}^{-1} ] is bounded ( in absolute value ) by , \|_{i } \\& \quad \leq h_{n } ( \| gf_{x } \|_{{\mathbb{r } } } + \| g \|_{i } \| f_{w } \|_{{\mathbb{r } } } ) \int_{{\mathbb{r } } } k_{n}^{2}(w ) dw \lesssim h_{n}^{-2\alpha+1}.\end{aligned}\ ] ] hence , \|_{i}}_{=o(h_{n}^{-2\alpha+1 } ) } \\ & \quad + \left \| \frac{1}{n } \sum_{j=1}^{n } \ { y_{j}-g(\cdot ) \ } k_{n}^{2}((\cdot - w_{j})/h_{n})-{\mathrm{e}}[\ { y - g(\cdot ) \ } k_{n}^{2}((\cdot - w)/h_{n } ) ] \right \|_{i}.\end{aligned}\ ] ] the second term on the right hand side is identical to \ } \right \|_{{\mathcal{f}}_{n}^{(3)}}.\ ] ] in view of the covering number bound for given in lemma [ lem : entropy ] , together with theorem 2.14.1 in , the expectation of the last term is \}^{2 } ] } \lesssim h_{n}^{-2\alpha}n^{-1/2}.\ ] ] therefore , the right hand side on ( [ eq : variance bound 1 ] ) is which is .hence , since , we have uniformly in . since , it remains to prove that \right \|_{i } \\ & = \left \| \frac{1}{n } \sum_{j=1}^{n } \{ f(y_{j},w_{j } ) - { \mathrm{e}}[f(y , w ) ] \ } \right \|_{{\mathcal{f}}_{n}^{(4 ) } } \end{aligned}\ ] ] is . in view of the covering number bound for given in lemma [ lem : entropy ] , together with theorem 2.14.1 in ,the expectation of the last term is \}^{2 } ] } \lesssim h_{n}^{-1}n^{-1/2 } = o\ { ( \log ( 1/h_{n}))^{-1 } \}.\ ] ] this completes the proof .we divide the proof into several steps .* step 1*. 
define \end{aligned}\ ] ] for .we first prove that to this end , we shall apply theorem 2.2 in to .let then applying theorem 2.2 in to with and , yields that there exists a random variable of which the conditional distribution given is identical to the distribution of , and such that which shows that there exists a sequence such that since , we have uniformly in , and the anti - concentration inequality for the supremum of a gaussian process ( lemma [ lem : anti - concentration ] ) yields that uniformly in .likewise , we have uniformly in .* step 2*. in view of the proof of step 1 , in order to prove the result ( [ eq : validity of mb ] ) , it is enough to prove that . to this end , define for , and we first prove that we begin with noting that \ } - h_{n}g(x ) \ { { \widehat}{f}_{x}^ { * } ( x ) - { \mathrm{e } } [ { \widehat}{f}_{x}^{*}(x ) ] \ } + a_{n}(x ) \\ & = o_{{\mathrm{p } } } \ { h_{n}^{-\alpha+1 } ( nh_{n})^{-1/2}\sqrt{\log ( 1/h_{n } ) } \}\end{aligned}\ ] ] uniformly in , so that it suffices to verify that \right \|_{i}\ ] ] is .since , the last term is step 2 in the proof of theorem 2 in shows that . for the second term ,observe that so that . for the first term , observe that so that .hence we have proved ( [ eq : intermediate ] ) .note that the result of step 1 and the fact that = o(\sqrt{\log ( 1/h_{n})}) ] and the function class is a vc class .hence we omit the detail for brevity ..simulated uniform coverage probabilities of by estimated confidence bands in $ ] under normally distributed with and laplace distributed .alternative sequences are used for bandwidth selection procedure .the simulated probabilities are computed for each of the three nominal coverage probabilities , 80% , 90% , and 95% , based on 2,000 monte carlo iterations . [ cols="^,^,^,^,^,^,^,^,^,^,^,^ " , ] 99 armstrong , t. and kolsr , m. ( 2014 ) . a simple adjustment for bandwidth snooping .babii , a. ( 2016 ) .honest confidence sets in nonparametric iv regression and other ill - posed models .bhattacharya , j. and bundorf , m.k .( 2009 ) . the incidence of the healthcare costs of obesity ._ j. health econ ._ * 28 * 649 - 658 .bickel , p. and rosenblatt , m. ( 1973 ) . on some global measures of the deviations of density function estimates .statist . _* 1 * 1071 - 1095 .correction ( 1975 ) * 3 * 1370 .birke , m. , bissantz , n. , and holzmann , h. ( 2010 ) .confidence bands for inverse regression models . _ inverse problems _ * 26 * 115020 .bissantz , n. , dmbgen , l. , holzmann , h. , and munk , a. ( 2007 ) .non - parametric confidence bands in deconvolution density estimation ._ j. r. stat .methodol . _ * 69 * 483 - 506 .bissantz , n. and holzmann , h. ( 2008 ) .statistical inference for inverse problems ._ inverse problems _ * 24 * 034009 .bohnomme , s. and robin , j .-generalized nonparametric deconvolution with an application to earnings dynamics .econ . stud . _ * 77 * 491 - 533 .blundell , r. , chen , x. , and kristensen , d. ( 2007 ) .semi - nonparametric iv estimation of shape - invariant engel curves ._ econometrica _ * 75 * 1613 - 1669 . bound , j. , brown , c. , and mathiowetz , n. ( 2001 ) .measurement error in survey data . in : _ handbook of econometrics _vol.5 ( eds . j.j . heckman and e.f .leamer ) elsevier pp .3705 - 3843 .calonico , s. , cattaneo , m.d . , and farrell , m.h .( 2015 ) . on the effect of bias estimation on coverage accuracy in nonparametric inference .carrasco , m. , florens , j .-renault , e. 
( 2007 ) .linear inverse problems in structural econometrics : estimation based on spectral decomposition and regularization . in : _ handbook of econometrics _vol.6 ( eds . j.j . heckman and e.e .leamer ) elsevier pp .5633 - 5751 .card , d. , dobkin , c. , and maestas , n. ( 2008 ) the impact of nearly universal insurance coverage on health care utilization : evidence from medicare .rev . _ * 98 * 2242 - 2258 .carroll , r.j . and hall , p. ( 1988 ) .optimal rates of convergence for deconvolving a density ._ j. amer .assoc . _ * 83 * 1184 - 1186 .carroll , r.j . ,maca , j.d . , and ruppert , d. ( 1999 ) .nonparametric regression in the presence of measurement error ._ biometrika _ * 86 * 541 - 554 .carroll , r.j ., ruppert , d. , stefanski , l.a . , and crainiceanu , c.m ._ measurement error in nonlinear models : a modern perspective ( 2nd edition)_. chapman & hall / crc .cavalier , l. ( 2008 ) .nonparametric statistical inverse problems . _ inverse problems _ * 24 * 034004 cawley , j. ( 2004 ) . the impact of obesity on wages . _ j. hum ._ * 39 * 451 - 474 .cawley , j. and meyerhoefer , c. ( 2012 ) .the medical care costs of obesity : an instrumental variables approach . _j. health econ . _* 31 * 219 - 230 .chan , l. k. and mak , t.k .( 1985 ) . on the polynomial functional relationship .methodol . _ * 47 * 510 - 518 .chen , x. ( 2007 ) .large sample sieve estimation of semi - nonparametric models .in : _ handbook of econometrics _vol.6 ( eds . j.j .heckman and e.e .leamer ) elsevier pp .5549 - 5632 .chen , x. and christensen , t. ( 2015 ) .optimal sup - norm rates , adaptivity and inference in nonparametric instrumental variables estimation .chen x , hong , h , and nekipelov , d. ( 2011 ) .nonlinear models of measurement errors ._ j. econ . lit . _* 49 * 901 - 937 .chen , x. , hong , h. , and tamer , e. ( 2005 ) .measurement error models with auxiliary data .stud . _ * 72 * 343 - 366 .chen , x. and reiss , m. ( 2011 ) on rate optimality for ill - posed inverse problems in econometrics ._ econometric theory _ * 27 * 497 - 521 .cheng , c .-kukush , a.g .( 2004 ) a goodness - of - fit test for a polynomial eiv model ._ ukrainian mathematical journal _ * 56 * 641 - 661 .cheng , c .-l . and schneeweiss , h. ( 1998 ) .polynomial regression with errors in the variables . _methodol . _ * 60 * 189 - 199 .chernozhukov , v. , chetverikov , d. , and kato , k. ( 2014a ) .gaussian approximation of suprema of empirical processes ._ * 42 * 1564 - 1597 .chernozhukov , v. , chetverikov , d. , and kato , k. ( 2014b ) . anti - concentration and honest , adaptive confidence bands .statist . _* 42 * 1787 - 1818 .chernozhukov , v. , chetverikov , d. , and kato , k. ( 2015 ) .comparison and anti - concentration bounds for maxima of gaussian random vectors ._ probab . theory related fields _ * 162 * 47 - 70 .chernozhukov , v. , chetverikov , d. , and kato , k. ( 2016 ) .empirical and multiplier bootstraps for suprema of empirical processes of increasing complexity , and related gaussian couplings ._ stochastic process ._ , to appear .chernozhukov , v. , lee , s. , and rosen , a. ( 2013 ) .intersection bounds : estimation and inference ._ econometrica _ * 81 * 667 - 737 .comte , f. and kappus , j. ( 2015 ) .density deconvolution from repeated measurements without symmetry assumption on the errors ._ j. multivariate anal . _ * 140 * 21 - 46 .comte , f. and lacour , c. ( 2011 ) .data - driven density estimation in the presence of additive noise with unknown distribution. _ j. r. stat .methodol . 
_ * 73 * 601 - 627 .delaigle , a. , fan , j. , and carroll , r.j . ( 2009 ) . a design - adaptive local polynomial estimator for the errors - in - variables problem ._ j. amer .assoc . _ * 104 * 348 - 359 .delaigle , a. and hall , p. ( 2008 ) .using simex for smoothing - parameter choice in errors - in - variables problems ._ j. amer .assoc . _ * 103 * 280 - 287 .delaigle , a. , hall , p. , and jamshidi , f. ( 2015 ) .confidence bands in nonparametric errors - in - variables regression ._ j. r. stat .methodol . _ * 77 * 149 - 169 .delaigle , a. , hall , p. , and meister , a. ( 2008 ) .on deconvolution with repeated measurements .statist . _* 36 * 665 - 685 .delaigle , a. and meister , a. ( 2007 ) .nonparametric regression estimation in the heteroscedastic errors - in - variables problem ._ j. amer .* 102 * 1416 - 1426 .diggle , p.j . and hall , p. ( 1993 ) . a fourier approach to nonparametric deconvolution of a density estimate .. b. stat ._ * 55 * 523 - 531 .dudley , r.m ._ real analysis and probability_. cambridge university press .efromovich , s. ( 1997 ) .density estimation for the case of supersmooth measurement error ._ j. amer ._ * 92 * 526 - 535 .einav , l. , leibtag , e. , and nevo , a. ( 2010 ) .recording discrepancies in nielsen homescan data : are they present and do they matter ?_ * 8 * 207 - 239 .van es , b. and gugushvili , s. ( 2008 ) .weak convergence of the supremum distance for supersmooth kernel deconvolution . _ statist .lett . _ * 78 * 2932 - 2938 .eubank , r.l . andspeckman , p.l .confidence bands in nonparametric regression ._ j. amer .assoc . _ * 88 * 1287 - 1301 .fan , j. ( 1991a ) .on the optimal rates of convergence for nonparametric deconvolution problems ._ * 19 * 1257 - 1272. fan , j. ( 1991b ) .asymptotic normality for deconvolution kernel density estimators ._ sankhya a _ * 53 * 97 - 110 .fan , j. and masry , e. ( 1992 ) .multivariate regression with errors - in - variables : asymptotic normality for mixing processes ._ j. multivariate anal . _ * 43 * 237 - 271 .fan , j. and truong , y.k .nonparametric regression with errors in variables ._ * 21 * 1900 - 1925 .finkelstein , e.a ., trogdon , j.g . , cohen , j.w . , and dietz , w. ( 2009 ) .annual medical spending attributable to obesity : payer- and service - specific estimates ._ health aff ._ * 28 * w822-w831 .folland , g.b ._ real analysis ( 2nd edition)_. wiley .fuller , w.a ._ measurement error models_. wiley .gin , e. and nickl , r. ( 2016 ) ._ mathematical foundations of infinite - dimensional statistical models_. cambridge university press .hall , p. ( 1991) . on convergence rates of suprema ._ probab . theory related fields _ * 89 * 447 - 455 .hall , p. and horowitz , j.l .nonparametric methods for inference in the presence of instrumental variables .statist . _* 33 * 2904 - 2929 . hall , p. and horowitz , j.l .a simple bootstrap method for constructing nonparametric confidence bands for functions ._ * 41 * 1892 - 1921. hall , p. and ma , y. ( 2007 ) .testing the suitability of polynomial models in error - in - variables problems .statist . _* 35 * 2620 - 2638 .hausman , j.a . ,newey , w.k . ,ichimura , h. , and powell , j.l .identification and estimation of polynomial errors - in - variables models ._ j. econometrics _ * 50 * 273 - 295 .horowitz , j.l ._ semiparamtric and nonparametric methods in econometrics_. springer .horowitz , j.l .applied nonparametric instrumental variables estimation ._ econometrica _ * 79 * 347 - 394 .horowitz , j. l. and lee , s. 
( 2012 ) .uniform confidence bands for functions estimated nonparametrically with instrumental variables ._ j. econometrics _ * 168 * 175 - 188 .horowitz , j.l . andmarkatou , m. ( 1996 ) .semiparametric estimation of regression models for panel data .stud . _ * 63 * 145 - 168 .hu , y. and sasaki , y. ( 2015 ) .closed - form estimation of nonparametric models with non - classical measurement errors. _ j. econometrics _ * 185 * 392 - 408 .johannes , j. ( 2009 ) .deconvolution with unknown error distribution .statist . _* 37 * 2301 - 2323 .kato , k. and sasaki , y. ( 2016 ) .uniform confidence bands in deconvolution with unknown error distribution .komls , j. , major , p. , and tusndy , g. ( 1975 ) .an approximation for partial sums of independent rv s and the sample df i. _ z. warhsch .gabiete _ * 32 * 111 - 131 .kotlarski , i. ( 1967 ) . on characterizing the gamma and the normal distribution ._ * 20 * 69 - 76 .li , t. and vuong , q. ( 1998 ) .nonparametric estimation of the measurement error model using multiple indicators. _ j. multivariate anal . _* 65 * 139 - 165 .lounici , k. and nickl , r. ( 2011 ) .global uniform risk bounds for wavelet deconvolution estimators .statist . _* 39 * 201 - 231 .mcmurry , t.l . and politis , d.n .nonparametric regression with infinite order flat - top kernels ._ j. nonparametric statist . _* 16 * 549 - 562 .meister , a. ( 2009 ) . _ deconvolution problems in nonparametric statistics_. springer .neumann , m.h .on the effect of estimating the error density in nonparametric deconvolution ._ j. nonparametric statist . _* 7 * 307 - 330 .neumann , m.h .deconvolution from panel data with unknown error distribution ._ j. multivariate anal . _ * 98 * 1955 - 1968 .neumann , m.h . andpolzehl , j. ( 1998 ) .simultaneous bootstrap confidence bands in nonparametric regression . _ j. nonparametric statist . _ * 9 * 307 - 333 .neumann , m.h . and rei , m. ( 2009 ) .nonparametric estimation for lvy processes from low - frequency observations. _ bernoulli _ * 15 * 223 - 248 .newey , w. and powell , j. ( 2003 ) .instrumental variables estimation of non - parametric models. _ econometrica _ * 71 * 1565 - 1578 .ogden , c.l . , carroll , m.d . , fryar , c.d . , and flegal , k.m .prevalence of obesity among adults and youth : united states , 2011 - 2014 ._ nchs data brief _ * 219*. hyattsville , md : national center for health statistics .otsu , t. and taylor , l. ( 2016 ) .specification testing for errors - in - variables models .proksh , k. , bissantz , n. , and dette , h. ( 2015 ) .confidence bands for multivariate and time dependent inverse regression models ._ bernoulli _ * 21 * 144 - 175 .rao , b.l.s.p .( 1992 ) _ identifiability in stochastic models : characterization of probability distributions ._ academic press .ridder , g. and moffitt , r. ( 2007 ) .the econometrics of data combination ._ handbook of econometrics _ * 6b * 5469 - 5547 .rio , e. ( 1994 ) .local invariance principles and their application to density estimation ._ probab . theory related fields _ * 98 * 21 - 45 .sato , k .-( 1999 ) . _ lvy processes and infinitely divisible distributions_. cambridge university press .schennach , s.m .nonparametric regression in the presence of measurement error ._ econometric theory _ * 20 * 1046 - 1093 .schennach , s.m .a bias bound approach to nonparametric inference .cemmap working paper cwp71/15 .schennach , s.m .recent advances in the measurement error literature . in : _ annual review of economics _ , vol .341 - 377 .schennach , s.m . andhu , y. ( 2013 ) . 
nonparametric identification and semiparametric estimation of classical measurement error models without side information . _ j. amer . stat . assoc . _ * 108 * 177 - 186 .schennach , s.m ., white h , and chalak , k. ( 2012 ) .local indirect least squares and average marginal effects in nonseparable structural systems ._ j. econometrics _ * 166 * 282 - 302 .schmidt - hieber , j. , munk , a. , and dmbgen , l. ( 2013 ) .multiscale methods for shape constraints in deconvolution : confidence statements for qualitative features .statist . _* 41 * 1299 - 1328 .smirnov , n.v .( 1950 ) . on the construction of confidence regions for the density of distribution of random variables ._ doklady akad .nauk sssr _ * 74 * 189 - 191 ( russian ) .song , w .- x .( 2008 ) model checking in errors - in - variables regression ._ j. multivariate anal . _ * 99 * 2406 - 2443 .stefanski , l. and carroll , r.j .deconvoluting kernel density estimators ._ statistics _ * 21 * 169 - 184 .van der vaart , a.w . and wellner , j.a ._ weak convergence and empirical processes : with applications to statistics_. springer .wasserman , l. ( 2006 ) ._ all of nonparamertric statistics_. springer .xia , y. ( 1998 ) .bias - corrected confidence bands in nonparametric regression ._ j. r. stat .methodol . _ * 60 * 797 - 811 .zhu , l .- x . and cui , h .- j .( 2005 ) testing the adequacy for a general linear errors - in - variables model ._ statist sinica _ * 15 * 1049 - 1068 .zhu , l .- x . , song , w .- x . , and cui , h .- j .( 2003 ) testing lack - of - fit for a polynomial errors - in - variables model ._ acta mathematicae applicatae sinica _ * 19 * 353 - 362 .
|
this paper develops a method to construct uniform confidence bands for a nonparametric regression function where a predictor variable is subject to a measurement error . we allow for the distribution of the measurement error to be unknown , but assume that there is an independent sample from the measurement error distribution . the sample from the measurement error distribution need not be independent from the sample on response and predictor variables . the availability of a sample from the measurement error distribution is satisfied if , for example , either 1 ) validation data or 2 ) repeated measurements ( panel data ) on the latent predictor variable with measurement errors , one of which is symmetrically distributed , are available . the proposed confidence band builds on the deconvolution kernel estimation and a novel application of the multiplier ( or wild ) bootstrap method . we establish asymptotic validity of the proposed confidence band under ordinary smooth measurement error densities , showing that the proposed confidence band contains the true regression function with probability approaching the nominal coverage probability . to the best of our knowledge , this is the first paper to derive asymptotically valid uniform confidence bands for nonparametric errors - in - variables regression . we also propose a novel data - driven method to choose a bandwidth , and conduct simulation studies to verify the finite sample performance of the proposed confidence band . applying our method to a combination of two empirical data sets , we draw confidence bands for nonparametric regressions of medical costs on the body mass index ( bmi ) , accounting for measurement errors in bmi . finally , we discuss extensions of our results to specification testing , cases with additional error - free regressors , and confidence bands for conditional distribution functions .
|
studies of networks initially focused on local characteristics or on macroscopic distributions ( of individual nodes and edges ) , but it is now common to consider `` mesoscale '' structures such as communities . indeed , there are numerous notions of community structure in networks .for example , one can define a network s community structure based on a hard or soft partitioning of network into sets of nodes that are connected more densely among themselves than to nodes in other sets , and one can also examine community structure by partitioning edges .one can also determine community structure by taking the perspective of a dynamical system ( e.g. , a markov process ) on a network .see ref . for myriad other notions of community structure , which have yielded insights on numerous systems in biology , political science , sociology , and many other areas .although community structure is the most widely studied mesoscale structure by far , numerous other types exist .these include notions of role similarity and many types of block models .perhaps the most prominent block structure aside from community structure is _ core - periphery structure _ , in which connections between core nodes and other core nodes are dense , connections between core nodes and peripheral nodes are also dense ( but possibly less dense than core - core connections ) , and peripheral nodes are sparsely connected to other nodes .core - periphery structure provides a useful complement for community structure .its origins lie in the study of social networks ( e.g. , in international relations ) , although notions such as `` nestedness '' in ecology also attempt to determine core network components . as with community structure ,there are numerous possible ways to examine core - periphery structure , although this has seldom been explored to date. a few different notions of core - periphery structure have been developed , although there are far fewer of these than there are notions of community structure . in this paper , we contrast two different notions of core - periphery structure the block - model perspective that we discussed above and a recently - developed notion that is appropriate for transportation networks ( and which need not satisfy the density properties of the block - model notion ) by calculating them for several different types of empirical and computer - generated networks . due to the rich variety of types of networks across various areas and disciplines , a wealth of different mesoscale features are possible .we expect a block - model notion of core - periphery structure to be appropriate for social networks , whereas it can be desirable to develop transport - based notions of core - periphery structure for road networks and other transportation networks .however , this intuition does not imply that application - blind notions can not be useful ( e.g. 
, a recently - developed block - model notion of core - periphery structure was helpful for analyzing the london metropolitan transportation system ) , but it is often desirable for network notions to be driven by applications for further development .this is also the case for community structure , where measures of modularity , conductance , information cost , and partition density ( for communities of edges ) are all useful .core - periphery structure depends on context and application , and it is important to compare different notions of core network components when considering core agents in a social network , core banks in a financial system , core streets and intersections in a road network , and so on .we focus on two different ways of characterizing core - periphery structures in networks : we examine density - based ( or `` structural '' ) coreness using intuition from social networks in which core agents either have high degree ( or strength , in the case of weighted networks ) , are neighbors of nodes with high degree ( or strength ) , or satisfy both properties and we examine transport - based coreness by modifying notions of betweenness centrality . to contrast these different types of core - periphery structure , we compute statistical properties of coreness measures applied to empirical networks , their correlations to each other , and their correlations to other properties of networks .with these calculations , we obtain interesting insights on several social , financial , and transportation networks .an additional contribution of this paper is our extension of the transport - based method in ref . to allow the assignment of a coreness measure to edges ( rather than just nodes ) .such a generalization is clearly important for transportation networks , for which one might want ( or even need ) to focus on edges rather than nodes .the remainder of this paper is organized as follows . in sec .[ sec : methods ] , we discuss the methods that we employ in this paper for studying density - based and transport - based core - periphery structure . we examine some social and financial networks in sec . [ sec : examples_non_transporation_networks ] and several transportation networks in sec .[ sec : examples_transporation_networks ] . to illustrate the effects of spatial embedding on transportation networks, we develop a generative model of roadlike networks in sec . [sec : examples_transporation_networks ] .we conclude in sec .[ sec : discussion ] .conventional definitions of core - periphery organization rely on connection densities among different sets of nodes ( in the form of block models ) or on structural properties such as node degree and strength .one approach to studying core - periphery structure relies on finding a group of core nodes or assigning coreness values to nodes by optimizing an objective function .the method introduced in ref . , which generalizes the basic ( and best known ) formulation in , is particularly flexible .for example , one can detect distinct cores in a network , and one can consider either discrete or continuous measures of coreness .this notion was used recently to examine the roles of brain regions for learning a simple motor task in functional brain networks . in the method of ref . , one seeks to calculate a centrality measure of coreness called a `` core score '' ( cs ) using the adjacency - matrix elements , where , the network has nodes , and the value indicates the weight of the connection between nodes and . 
for directed networks ( see the discussion below ) , we use to denote the weight of the connection from node to node . when , there is no edge between and .we insert the core - matrix elements into the core quality where the parameter ] determines the fraction of core nodes .we decompose the core - matrix elements into a product form , , where are the elements of a core vector .reference also discusses the use of alternative `` transition functions '' to the one in eq .( [ transition_function ] ) .we wish to determine the core - vector elements in ( [ transition_function ] ) so that the core quality in eq .( [ core_quality ] ) is maximized .this yields a cs for node of where the normalization factor is determined so that the maximum value of cs over the entire set of nodes is 1 . in practice , we perform the optimization using some computational heuristic and some sample of points in the parameter space with coordinates \times [ 0,1] ] if node is in the set that consists of `` optimal backup paths '' from node to node , where we stress that _ the edge _ _ is removed from _ , and = 0 ] .assume that an agent stands at a node and wishes to travel to node .let be the vector from node to node , and let $ ] be the angle between and .a greedy navigator considers the set of neighbors of and it moves to the neighbor that has the smallest , where ties are broken by taking a neighbor uniformly at random among the neighbors with the smallest angle .if all neighbors have been visited , then the navigator goes back to the node that it left to reach .this procedure is repeated until node is reached ( which will happen eventually if is connected , or , more generally , if and belong to the same component ) .it is important at this stage to comment about weight versus `` distance '' in weighted networks . in a weighted network ,a larger weight represents a closer or stronger relation .if we are given such a network ( with weight - matrix elements ) , we construct a distance matrix whose elements are for nonzero and when .we then use the distance matrix to determine the length of a path and in all of our calculations of pss , gsnps , and bcs .alternatively , we might start with a set of network distances or euclidean distances , and then we can use that information directly . in this paper, we will consider transportation networks that are embedded in and .in contrast to css , we can calculate pss for directed networks very naturally simply by restricting ourselves to directed paths .we now examine some social and financial networks , as it is often argued that such networks possess a core - periphery structure . indeed, the intuition behind density - based core - periphery structure was developed from studies of social networks . as discussed in sec .[ sec : transport_cp ] , we highlight an important point for weighted networks . in such networks ,each edge has a value associated with it .we consider data associated with such values that come in one of two forms . in one form , we have a matrix entry for which a larger value indicates a closer ( or stronger ) relationship between nodes and ( where ) . in this case, we have a weighted adjacency matrix whose elements are . in the second form , we have a matrix entry for which a larger value indicates a more distant ( literally , in the case of transportation networks ) or weaker relationship between nodes and ( where ) . 
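to make the density-based notion concrete, here is a compact sketch of a core-score computation. the two-piece transition function and the quality-weighted aggregation are our paraphrase of the published method, and the greedy random-swap search is only a crude stand-in for the simulated-annealing heuristic normally used, so treat the details as illustrative rather than as the reference implementation.

```python
import numpy as np

def core_vector(order, alpha, beta):
    """core values for nodes listed in `order` from most peripheral to most core,
    using a two-piece transition function; alpha sets the sharpness of the
    core/periphery contrast and beta the location of the boundary."""
    N = len(order)
    m = max(int(np.floor(beta * N)), 1)
    rank = np.empty(N)
    rank[order] = np.arange(1, N + 1)
    return np.where(rank <= m,
                    rank * (1 - alpha) / (2 * m),
                    (rank - m) * (1 - alpha) / (2 * (N - m)) + (1 + alpha) / 2)

def core_scores(A, n_grid=8, n_sweeps=2000, seed=0):
    """aggregate core scores over an (alpha, beta) grid, with R = sum_ij A_ij C_i C_j."""
    rng = np.random.default_rng(seed)
    N = A.shape[0]
    cs = np.zeros(N)
    for alpha in np.linspace(0.05, 0.95, n_grid):
        for beta in np.linspace(0.05, 0.95, n_grid):
            order = rng.permutation(N)
            C = core_vector(order, alpha, beta)
            best = float(C @ A @ C)
            for _ in range(n_sweeps):                 # greedy random swaps in place of annealing
                i, j = rng.integers(N, size=2)
                order[i], order[j] = order[j], order[i]
                C_try = core_vector(order, alpha, beta)
                R_try = float(C_try @ A @ C_try)
                if R_try >= best:
                    best, C = R_try, C_try
                else:
                    order[i], order[j] = order[j], order[i]
            cs += C * best                            # weight each core vector by its core quality
    return cs / cs.max()                              # normalise so that the maximum core score is 1
```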
in this case , the elements yield a distance matrix , and we calculate weighted adjacency matrix elements using the formula ( for ) and when . [ cols= " < ,< , < , < , < , < " , ] to examine correlations between the coreness measures and bc values in roadlike networks , we generate 2d and 3d roadlike structures from a recently introduced navigability - based model for road networks .we start by determining the locations of nodes either in the unit square ( for 2d roadlike networks ) or in the unit cube ( for the 3d case ) .we then add edges by constructing a minimum spanning tree ( mst ) via kruskal s algorithm .let denote the total ( euclidean ) length of the mst .we then add the shortcut that minimizes the mean shortest path length over all node pairs , and we repeat this step until the total length of the network reaches a certain threshold .( when there is a tie , we pick one shortcut uniformly at random from the set of all shortcuts that minimize the shortest path length . ) our final network is the set of nodes and edges right before the step that would force us to exceed this threshold by adding a new shortcut .reference called this procedure a `` greedy shortcut construction . '' in adding shortcuts , we also apply an additional constraint to emulate real road networks : new edges are not allowed to cross any existing edges .consider a candidate edge ( among all of the possible pairs of nodes without an edge currently between them ) that connects the vectors and .we start by examining the 2d case .suppose that there is an edge ( which exists before the addition of a new shortcut ) that connects and .the equation of intersection , then implies that }{(\delta { \bf p } \times \delta { \bf q})_z}\ , , \quad t \in [ 0,1]\ , , \\ u & = \frac{[({\bf q } - { \bf p } ) \times \delta { \bf p } ] _ z}{(\delta { \bf p } \times \delta { \bf q})_z}\ , \quad u \in [ 0,1]\ , .\end{split } \label{intersection_solution}\ ] ] in eq . , the component ( indicated by the subscripts ) is perpendicular to the plane that contains the network . if eq .has a solution , then intersects with , so is excluded and we try another candidate edge .we continue until we exhaust every pair of nodes that are currently not connected to each other by an edge .we now consider the singular cases , in which the denominator in eq. equals .when , it follows that ( i.e. , they are parallel to each other ) , so they can not intersect ; therefore , is not excluded . when and [ which is equivalent to because implies that , , and are all parallel to each other ] , and are collinear and share infinitely many points , so is excluded from consideration in that case as well now consider the 3d case .the distance between ( the closest points of ) and is thus , if , then it is guaranteed that and do not intersect .again , corresponds to the parallel case , so and can not intersect . if , then the vectors and yield a plane , so we obtain the same solution as in the 2d case , where we replace the component in eq . with the component that lies in the direction perpendicular to the relevant 2d plane .we generate synthetic roadlike networks by placing 100 nodes uniformly at random inside of a unit square ( 2d ) or cube ( 3d ) , and we use a threshold of for the total length of the edges . in fig .[ null_cs_ps ] , we show examples of 2d and 3d roadlike networks . 
for each embedding dimension , we consider 50 different networks in our ensemble .we consider 50 different initial node locations in each case , but that is the only source of stochasticity ( except for another small source of stochasticity from the tie - breaking rule ) because the construction process itself is deterministic .our main observation from examining these synthetic networks is that correlations of cs values with other quantities ( geodesic ps values , gsnp values , and bc values ) are much larger in the 3d networks than in the 2d networks ( see table [ summarytable_transport ] ) .this suggests that the embedding dimension of the roadlike networks is related to the correlations that we see in coreness ( and betweenness ) measures . to further investigate the effects of the spatial embedding, we compare the results from the 2d generative model with a generative model that is the same except for a modified rule that allows some intersecting edges .as shown in table [ summarytable_transport ] , the correlation values between ps values and geodesic bc values for the modified model are slightly larger than in the original model , though not that many edges cross each other in practice [ see figs .[ null_cs_ps](e ) and ( f ) ] . therefore, although prohibiting edge crossings has some effect on correlations , the fact that most edges can be drawn in the same plane when edge crossings are allowed ( i.e. , the graphs in the modified model are `` almost 2d '' in some sense ) suggests that the dimension in which a network ( or most of a network ) is embedded might have a larger effect on correlations between coreness ( and betweenness ) measures than the edge - crossing rule .in this paper , we examined two types of core - periphery structure one developed using intuition from social networks and another developed using intuition from transportation networks in several networks from a diverse set of applications .we showed that correlations between these different types of structures can be very different in different types of networks .this underscores the fact that it is important to develop different notions of core - periphery structure that are appropriate for different situations .we also illustrated in our case studies that coreness measures can detect important nodes and edges .for roadlike networks , we also examined the effect of spatial embeddedness on correlations between coreness measures .as with the study of community structure ( and many other network concepts ) , the notion of core - periphery structure is context - dependent .for example , we illustrated that the intuition behind what one considers a core road or junction in a road ( or roadlike ) network is different from the intuition behind what one considers to be a core node in a social network .consequently , it is important to develop and investigate ( and examine correlations between ) different notions of core - periphery structure .we have taken a step in this direction through our case studies in this paper , and we also obtained insights in several applications .our work also raises interesting questions .for example , how much of the structure of the rabbit warren stems from the fact that it is embedded in 3d , how much of its structure stems from its roadlike nature , and how much of its structure depends fundamentally on the fact that it was created by rabbits ( but would be different from other roadlike networks that are also embedded in 3d ) ?s.h.l . 
and m.a.p .were supported by grant no .ep / j001795/1 from the epsrc , and m.a.p .was also supported by the european commission fet - proactive project plexmath ( grant no .m.c . was supported by afosr muri grant no .fa9550 - 10 - 1 - 0569 .some computations were carried out in part using the servers and computing clusters in the complex systems and statistical physics lab ( csspl ) at the korea advanced institute of science and technology ( kaist ) .david lusseau provided the dolphin network data , hannah sneyd and owen gower provided the rabbit - warren data , and davide cellai provided the interbank network data .we thank puck rombach for the code to produce cs values and dan fenn for the code to produce mrfs .we thank simon buckley and john howell for assistance with preparation of the rabbit - warren data .we thank young - ho eom and taha yasseri for helpful comments on the interbank network .finally , we thank roger trout and anne mcbride for their expert opinions on the rabbit warren .dorogovtsev and j.f.f .mendes , adv .51 * , 1079 ( 2002 ) ; s. boccaletti , v. latora , y. moreno , m. chavez , and d .- u .hwang , phys .rep . * 424 * , 175 ( 2006 ) ; m. e. j. newman , _ networks : an introduction _( oxford university press , oxford , u.k . ,2010 ) ; s. wasserman and k. faust , _ social network analysis : methods and applications _( cambridge university press , cambridge , u.k . , 1994 ) .copyright 2013 , networkx developers ( last updated in 2013 ) , graphviz_layout : create node positions using pydot and graphviz .copyright 2010 , networkx developers ( late updated in 2012 ) , draw_networkx_edges : draw the edges of the graph .the rabbit warren was excavated for the purpose of filming a documentary that aired on the bbc .the injection phase was 2224 january 2013 .the excavation phase started on 810 april with mechanical excavation .a mixture of mechanical and hand excavation was done on 1517 april .there was exclusively hand excavation on 2223 april , and finishing touches were applied on 30 april 2013 ( while the documentary was being filmed ) . the simplified rabbit warren data that we used in this paper is available at https://sites.google.com/site/lshlj82/rabbit_warren_data.zip .there are two files : one has the node information , and the other has the edge information .
|
networks often possess mesoscale structures , and studying them can yield insights into both structure and function . it is most common to study community structure , but numerous other types of mesoscale structures also exist . in this paper , we examine core - periphery structures based on both density and transport . in such structures , core network components are well - connected both among themselves and to peripheral components , which are not well - connected to anything . we examine core - periphery structures in a wide range of examples of transportation , social , and financial networks including road networks in large urban areas , a rabbit warren , a dolphin social network , a european interbank network , and a migration network between counties in the united states . we illustrate that a recently developed transport - based notion of node coreness is very useful for characterizing transportation networks . we also generalize this notion to examine core versus peripheral edges , and we show that the resulting diagnostic is also useful for transportation networks . to examine the properties of transportation networks further , we develop a family of generative models of roadlike networks . we illustrate the effect of the dimensionality of the embedding space on transportation networks , and we demonstrate that the correlations between different measures of coreness can be very different for different types of networks .
|
it is well known that the classical heat equation arises from the energy balance law where and are the medium mass density and the medium specific heat respectively , together with the fourier law of heat conduction where is the thermal conductivity .the obtained equation , even if widely used in different fields of the applied science , has the unrealistic feature to imply an infinite speed of heat propagation .different attempts to generalize the classical fourier law have been considered in the literature in order to avoid this unphysical paradox .the first relevant generalization , derived by cattaneo in is based on the following modification of the fourier law of heat conduction that , after substitution in , leads to the so called _ telegraph equation _ the related physical picture is described by the so called _ extended irreversible thermodynamics _for completeness , we observe that is also named in literature maxwell cattaneo law as it was firstly considered by maxwell and also cattaneo vernotte law , since vernotte in considered the paradox of infinite speed of propagation at the same time of cattaneo . a heuristic explanation of equation as a suitable modification of the classical heat flux law , is given by the idea that inertial effects should be considered in models of heat conduction , since the heat flux does not depend instantaneously on the temperature gradient at one point and therefore memory effects are not negligible .in this framework , a more general analysis about the role of memory effects in heat propagation , was then discussed in the classical paper of gurtin and pipkin . in this paper ,the relation between the flux and the gradient of the temperature field is given by this equation describes a general heat flux history model depending by the particular choice of the relaxation kernel . in the case of ] in is invariant under the function .indeed = \frac{2(\gamma+2)}{\gamma^2}(x+c)^{2/\gamma}.\ ] ] + according to the invariant subspace method ( see the appendix ) , we can look for a solution of the form by substituting in , we obtain the claimed result .the main difficulty for this second class of solutions , is to solve exactly the nonlinear fractional equation .this non - trivial problem can not be simply handled by analytical methods , except for the case that leads to a simple linear fractional equation whose solution is well - known ( see ) . 
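the invariance property quoted above is easy to verify symbolically; the following check assumes that the nonlinear operator in question is the flux term \partial_x ( t^\gamma \partial_x t ) acting on the profile ( x + c )^{2/\gamma}.

```python
import sympy as sp

x, c, gamma = sp.symbols("x c gamma", positive=True)
T = (x + c) ** (2 / gamma)
flux_term = sp.diff(T ** gamma * sp.diff(T, x), x)                  # d/dx ( T^gamma dT/dx )
print(sp.simplify(flux_term / T - 2 * (gamma + 2) / gamma ** 2))    # prints 0: T is mapped to a multiple of itself
```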
with the following proposition ,we provide a more general result , by introducing an _ ad hoc _ non homogeneous term parametrizing a space - time dependent source term in the heat equation .assume the source term given by ^{1/\gamma } \left(\frac{x^2}{t^{2-\nu}}\right)^{1/\gamma}.\ ] ] then , the generalized nonlinear cattaneo equation with source admits as a solution ^{1/\gamma } \left(\frac{x^2}{t^{2-\nu}}\right)^{1/\gamma}.\ ] ] let us consider the following _ ansatz _ on the solution of the equation this assumption is motivated by the fact that the l.h.s .term of equation ( acting on the ) is invariant ( see ) for the function in the sense that , if we call =\frac{\partial}{\partial x}t^{\gamma}\frac{\partial t}{\partial x}+g(x , t),\ ] ] we have that = x^{2/\gamma}\bigg[\frac{2}{\gamma}\left(1+\frac{2}{\gamma}\right)+\frac{c}{t^{\frac{2-\nu}{\gamma}}}\bigg],\ ] ] where ^{1/\gamma}.\ ] ] then , by substituting in we are able to find the unknown function by solving the nonlinear ordinary fractional differential equation (t)= \frac{2}{\gamma}\left(\frac{2+\gamma}{\gamma}\right)f^{\gamma+1}+ c \\frac{\nu-2}{\gamma}\left(\frac{\nu-2}{\gamma}-1\right)t^{\frac{\nu-2}{\gamma}-2}.\ ] ] we search a solution to of the form and by using the following property for ( see ) , we obtain .we observe that , by means of the invariant subspace method ( see and appendix for more details ) , it is possible to prove that equation for particular choices of the power admits solutions in subspaces of dimension 2 or 3 . in particular for , equation admits the subspace , i.e. a solution of the form , where the functions and solve the following system of coupled nonlinear fractional differential equations we stress that also in this case the system is hard to be handled by means of analytical methods .we now consider , as a second interesting case , still in the framework of the fractional calculus approach to nonlinear heat propagation with memory , the case in which the relaxation kernel in is given by the choice corresponds to a generalized cattaneo law discussed by povstenko in . in this case we obtain the following generalized heat equation (x,\tau ) d\tau = j_t^\nu \bigg[\frac{\partial}{\partial x}t^{\gamma } \frac{\partial t}{\partial x}\bigg],\ ] ] where is the fractional riemann - liouville integral .recalling that ( see e.g. ) and applying the fractional derivative in the sense of caputo to both terms of the equality , we obtain observe that the overall order of the time derivative belongs to |x|\leq\lambda r_0 |x|>\lambda r_0 |x|\leq\lambda r_0 |x|>\lambda r_0 ] is a nonlinear differential operator .given linearly independent functions we call the -dimensional linear space this space is called invariant under the given operator ] for any .this means that there exist functions such that = \phi_1(c_1, .... ,c_n)f_1(x)+ ...... +\phi_n(c_1, .... ,c_n)f_n(x),\ ] ] where are arbitrary constants .+ once the set of functions that form the invariant subspace has been determined , we can search an exact solution of in the invariant subspace in the form where . in this way, we arrive to a system of odes . in many cases , this problem is simpler than the original one and allows to find exact solutions by just separating variables .we refer to the monograph for further details and applications of this method .recent applications of the invariant subspace method to find explicit solutions for nonlinear fractional differential equations have been discussed in different papers , for example in .v. galaktionov and s. 
svirshchevskii , _ exact solutions and invariant subspaces of nonlinear partial differential equations in mechanics and physics _ , chapman and hall / crc applied mathematics and nonlinear science series ( 2007 ).
r. gazizov and a. kasatkin , construction of exact solutions for fractional order differential equations by the invariant subspace method , _ computers and mathematics with applications _ , 66(5 ) : 576 - 584 ( 2013 ).
r. metzler , j. h. jeon , a. g. cherstvy , and e. barkai , anomalous diffusion models and their properties : non - stationarity , non - ergodicity , and ageing at the centenary of single particle tracking , _ physical chemistry chemical physics _ , 16(44 ) : 24128 - 24164 ( 2014 ).
r. sahadevan and t. bakkyaraj , invariant subspace method and exact solutions of certain nonlinear time fractional partial differential equations , _ fractional calculus and applied analysis _ , 18(1 ) : 146 - 162 ( 2015 ).
|
we study nonlinear heat conduction equations with memory effects within the framework of the fractional calculus approach to the generalized maxwell cattaneo law . our main aim is to derive the governing equations of heat propagation , considering both the empirical temperature dependence of the thermal conductivity coefficient ( which introduces nonlinearity ) and memory effects , according to the general theory of gurtin and pipkin of finite velocity thermal propagation with memory . in this framework , we consider in detail two different approaches to the generalized maxwell cattaneo law , based on the application of long tail mittag leffler memory function and power law relaxation functions , leading to nonlinear time fractional telegraph and wave type equations . we also discuss some explicit analytical results to the model equations based on the generalized separating variable method and discuss their meaning in relation to some well known results of the ordinary case .
|
( * * l**eo s * * p**arallel * * ar**chitecture and * * d**atastructures ) is designed as a generic system platform for implementing higher - order ( ho ) logic based knowledge representation , and reasoning tools . in particular , provides the base layer of the new ho automated theorem prover ( atp ) , the successor of the well known provers and .previous experiments with and the oants mechanism indicate a flexible , multi - agent blackboard architecture is well - suited for automating ho logic . however , ( due to project constraints ) such an approach has not been realized in . instead, the focus has been on the proof search layer in combination with a simple , sequential collaboration with an external first - order ( fo ) atp .also provides improved term data structures , term indexing , and term sharing mechanisms , which unfortunately have not been optimally exploited at the clause and the proof search layer . for the development of the philosophy thereforehas been to allocate sufficient resources for the initial development of a flexible and reusable system platform .the goal has been to bundle , improve , and extend the features with the highest potential of the predecessor systems , and oants .the result of this initiative is , which is written in scala and currently consists of approx .13000 lines of code .combines a sophisticated data structure layer ( polymorphically typed -calculus with nameless spine notation , explicit substitutions , and perfect term sharing ) , with a multi - agent blackboard architecture ( supporting prover parallelism at the term , clause , and search level ) and further tools including a parser for all tptp syntax dialects , generic support for interfacing with external reasoners , and a command line interpreter .such a combination of features and support tools is , up to the authors knowledge , not matched in related ho reasoning frameworks .the intended users of the package are implementors of ho knowledge representation and reasoning systems , including novel atps and model finders .in addition , we advocate the system as a platform for the integration and coordination of heterogeneous ( external ) reasoning tools .data structure choices are a critical part of a theorem prover and permit reliable increases of overall performance when implemented and exploited properly .key aspects for efficient theorem proving have been an intensive research topic and have reached maturity within fo - atps .naturally , one would expect an even higher impact of the data structure choices in ho - atps .however , in the latter context , comparably little effort has been invested yet probably also because of the inherently more complex nature of ho logic .[ [ term - language . ] ] term language .+ + + + + + + + + + + + + + the term language extends the simply typed -calculus with parametric polymorphism , yielding the second - order polymorphically typed -calculus ( corresponding to in barendregt s -cube ) .in particular , the system under consideration was independently developed by reynolds and girard and is commonly called system f today . further extensions , for example to admit dependent types , are future work .thus , supports the following type and term language : + & \quad | \ ; \alpha \hspace{2.5em } \text{\small(type variable ) } \\[-.4em ] & \quad | \ ; \tau \to \nu \hspace{.5em } \text{\small(abstraction type ) } \\[-.4em ] & \quad | \ ; \forall \alpha . 
\; \tau \hspace{.8em } \text{\small(polymorphic type ) } \\[.5em ] \end{split}\ ] ] & \quad | \ ; ( \lambda x_{\tau } \ : s_\nu)_{\tau \to \nu } \ ; | \ ; ( s_{\tau \to \nu } \ ; t_\tau)_\nu \hspace{2.4em } \text{\small(term abstr ./ appl . ) } \\[-.4em ] & \quad | \ ; ( \lambda \alpha \ : s_\tau)_{\forall \alpha\;\tau } \hspace{.35em } \ ; | \ ; ( s_{\forall \alpha\;\tau } \ ; \nu)_{\tau[\alpha/\nu ] } \hspace{.9em } \text{\small(type abstr . / appl . ) } \end{split}\ ] ] an example term of this language is : + [ [ nameless - representation . ] ] nameless representation .+ + + + + + + + + + + + + + + + + + + + + + + + internally , employs a locally nameless representation ( both at the type and term level ) , that extends de - bruijn indices to ( bound ) type variables .the definition of de - bruijn indices for type variables is analogous to the one for term variables .thus , the above example term is represented namelessly as where de - bruijn indices for type variables are underlined .[ [ spine - notation - and - explicit - substitutions . ] ] spine notation and explicit substitutions .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + on top of nameless terms , employs spine notation and explicit substitutions .the first technique allows quick head symbol queries , and efficient left - to - right traversal , e.g. for unification algorithms .the latter augments the calculus with substitution closures that admit efficient ( partial ) -normalization runs .internally , the above example reads where combines function _ heads _ to argument lists ( _ spines _ ) in which denotes concatenation of arguments .[ [ term - sharingindexing . ] ] term sharing / indexing .+ + + + + + + + + + + + + + + + + + + + + + terms are perfectly shared within , meaning that each term is only constructed once and then reused between different occurrences .this does not only reduce memory consumption in large knowledge bases , but also allows constant - time term comparison for syntactic equality using the term s pointer to its unique physical representation .for fast ( sub-)term retrieval based on syntactical criteria ( e.g. head symbols , subterm occurrences , etc . ) from the term indexing mechanism , terms are kept in -normal -long form .[ [ suite - of - normalization - strategies . ] ] suite of normalization strategies .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + comes with a number of different ( heuristic ) -normalization strategies that adjust the standard leftmost - outermost strategy with different combinations of strict and lazy substitution composition resp .normalization and closure construction .-normalization is invariant wrt .-normalization of spine terms and hence -normalization ( to long form ) is applied only once for each freshly created term .[ [ evaluation - and - findings . 
] ] evaluation and findings .+ + + + + + + + + + + + + + + + + + + + + + + + a recent empirical evaluation has shown that there is _ no single best reduction strategy _ for ho - atps .more precisely , for different tptp problem categories this study identified different best reduction strategies .this motivates future work in which machine learning techniques could be used to suggest suitable strategies .in addition to supporting classical , sequential theorem proving procedures , offers means for breaking the global atp loop down into a set of subtasks that can be computed in parallel .this also includes support for subprover parallelism as successfully employed , for example , in isabelle / hol s sledgehammer tool .more generally , is construed to enable parallalism at various levels inside an atp , including the term , clause , and search level . for this , provides a flexible multi - agent blackboard architecture .[ [ blackboard - architecture . ] ] blackboard architecture .+ + + + + + + + + + + + + + + + + + + + + + + + process communication in is realized indirectly via a blackboard architecture .the blackboard is a collection of globally shared and accessible data structures which any process , i.e. agent , can query and manipulate at any time in parallel . from the blackboard s perspective each process is a specialist responsible for exactly one kind of problem .the blackboard is generic in the data structures , i.e. it allows the programmer to add various kinds data structures for any kind of data .insertion into the data structures is handled by the blackboard .hence , each specialist can indeed by specialized on a single data structure .the blackboard mechanism and associated data structures provide specific support for nested and - or search trees , meaning that sets of formulae can be split into ( nested ) and - or contexts .moreover , for each supercontext respective tptp szs status information is automatically inferred from the statuses of its subcontexts .[ [ agents . ] ] agents .+ + + + + + + in specialist processes can be modeled as agents .classically , agents are composed of three components : environment perception , decision making , and action execution .the perception of agents is trigger - based , meaning that each agent is notified by a change in the blackboard .agents are to be seen as homomorphisms on the blackboard data together with a filter when to apply an action .depending on the perceived change of the resp .state of the blackboard an agent decides on an action it wants to execute .[ [ auction - scheduler . 
] ] auction scheduler .+ + + + + + + + + + + + + + + + + + action execution in is coordinated by an auction based scheduler , which implements an own approximation algorithm for combinatorical auctions .more precisely , each agent computes and places a bid for the execution of its action(s ) .the auction based scheduler then tries to maximize the global benefit of the particular set of actions to choose .this selection mechanism works uniformly for all agents that can be implemented in .balancing the value of the actions is therefore crucial for the performance and the termination of the overall system .a possible generic solution for the agents bidding is to apply machine learning techniques to optimize the bids for the best overall performance .this is future work .note that the use of advanced agent technology in is optional .a traditional atp can still be implemented , for example , as a single , sequential reasoner instantiating exactly one agent in the framework . [ [ agent - implementation - examples . ] ] agent implementation examples .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + for illustration purposes , some agent implementations have been exemplarily included in the package .for example , simple agents for _ simplification _ , _ skolemization _ , _ prenex - form _ , _ negation - normal - form _ and _ paramodulation _ are provided . moreover ,the agent - based integration of external atps is demonstrated and their parallelization is enabled by the agent framework .this includes agents embodying and satallax running remotely on the systemontptp servers in miami .these example agents can be easily adapted for other tptp compliant atps .each example agent comes with an applicability filter , an action definition and an auction value computation .the provided agents suffice to illustrate the working principles of the multi - agent blackboard architecture to interested implementors .after the official release of , further , more sophisticated agents will be included and offered for academic reuse .the framework provides useful further components .for example , a generic parser is provided that supports all tptp syntax dialects . moreover ,a command line interpreter supports fine grained interaction with the system .this is useful not only for debugging but also for training and demonstration purposes .as pointed at above , useful support is also provided for the integration of external reasoners based on the tptp infrastructure .this also includes comprehensive support for the tptp szs result ontology .moreover , ongoing and future work aims at generic means for the transformation and integration of ( external ) proof protocols , ideally by exploiting results of projects such as proofcert .there is comparably little related work to , since higher - order theorem provers typically implement their own data structures .related systems ( mostly concerning term representation ) include and teyjus , the abella interactive theorem prover , and the logical framework twelf .m. abadi , l. cardelli , p .-curien , and j .- j . levy .explicit substitutions . in _ proc . of the 17th acm sigplan - sigact symposium on principles of programming languages _ , popl 90 , pages 3146 , new york , ny , usa , 1990 .acm . c. benzmller , f. theiss , l. paulson , and a. fietzke .- a cooperative automatic theorem prover for higher - order logic ( system description ) . in _ proc . of ijcar 2008_ , volume 5195 of _ lncs _ , pages 162170 .springer , 2008 . c. liang , d. 
mitchell .system description : teyjus - a compiler and abstract machine based implementation of . in _ automated deduction cade-16_ , lncs , vol .1632 , pp .springer berlin heidelberg ( 1999 ) f. pfenning , c. schrmann .system description : twelf a meta - logical framework for deductive systems . in _ automated deduction _ , cade-16 , trento , italy , july 7 - 10 , 1999 , proceedings .. 202206 ( 1999 ) a. steen .efficient data structures for automated theorem proving in expressive higher - order logics .master s thesis , freie universitt berlin , 2014. http://userpage.fu-berlin.de/lex/drop/steen_datastructures.pdf .m. wisniewski .agent - based blackboard architecture for a higher - order theorem prover .master s thesis , freie universitt berlin , 2014 .
|
supports the implementation of knowledge representation and reasoning tools for higher - order logic(s ) . it combines a sophisticated data structure layer ( polymorphically typed -calculus with nameless spine notation , explicit substitutions , and perfect term sharing ) with an ambitious multi - agent blackboard architecture ( supporting prover parallelism at the term , clause , and search level ) . further features of include a parser for all tptp dialects , a command line interpreter , and generic means for the integration of external reasoners .
|
the radio frequency ( rf ) spectrum used in industrial , scientific and medical radio bands and telecommunication radio bands are crowded with various wireless communication systems . recently ,optical wireless communication technology , where information is conveyed through optical radiations in free space in outdoor and indoor environments , is emerging as a promising complementary technology to rf communication technology . while communication using infrared wavelengths has been in existence for quite some time , , more recent interest centers around indoor communication using visible light wavelengths , .a major attraction in indoor visible light communication ( vlc ) is the potential to simultaneously provide both energy - efficient lighting as well as high - speed short - range communication using inexpensive high - luminance light - emitting diodes ( led ) .several other advantages including no rf radiation hazard , abundant vlc spectrum at no cost , and very high data rates make vlc increasingly popular . for example , a 3 gbps single - led vlc link based on ofdm has been reported recently .also , multiple - input multiple - output ( mimo ) techniques , which are immensely successful and popular in rf communications , , can be employed in vlc systems to achieve improved communication efficiencies ,, . in particular, it has been shown that mimo techniques can provide gains in vlc systems even under line - of - sight ( los ) conditions which provide only little channel differences .our new contribution in this paper is the investigation of _ generalized spatial modulation ( gsm ) _ , an attractive mimo transmission scheme , in the context of vlc .such a study , to our knowledge , has not been reported before . in the context of vlc systems ,mimo techniques including spatial multiplexing ( smp ) , space shift keying ( ssk ) , generalized space shift keying ( gssk ) , and spatial modulation ( sm ) have been investigated in the literature - . in smp , there are leds at the transmitter and all of them are activated simultaneously in a given channel use , such that symbols from a positive real - valued -ary pulse amplitude modulation ( pam ) alphabet are sent in a channel use .thus , the transmission efficiency in smp is bits per channel use ( bpcu ) . in ssk , there are leds , out of which only one will be activated in a given channel use .the led to be activated is chosen based on information bits .only the index of this active led will convey information bits , so that the transmission efficiency is bpcu .this means that a large number of leds is needed to achieve high transmission efficiencies in ssk .that is , since , the number of leds required in ssk is exponential in the transmission efficiency . on the other hand , ssk has the advantage of having no interference , since only one led will be active at any given time and the remaining leds will be off .gssk is a generalization of ssk , in which out of leds will be activated in a given channel use , and the indices of the active leds will convey information bits - .since there are possibilities of choosing the active leds , the transmission efficiency in gssk is given by bpcu .sm is similar to ssk ( i.e. 
, one out of leds is activated and this active led is chosen based on information bits ) , except that in sm a symbol from a positive real - valued -ary pam alphabet is sent on the active led .so , the transmission efficiency in sm is bpcu .a comparative study of smp and sm in vlc systems has shown that , for the same transmission efficiency , sm outperforms smp under certain geometric conditions . like the generalization of ssk to gssk, it is possible to generalize sm .that is , activate out of leds in a given channel use , and , on each active led , send a symbol from a positive real - valued -ary pam alphabet .such a scheme , referred to as _ generalized spatial modulation ( gsm ) _ , then has a transmission efficiency of bpcu .note that both sm and smp become special cases of gsm for and , respectively .gsm in the context of rf communications has been investigated in the literature - .however , gsm in the context of vlc systems has not been reported so far .our contribution in this paper attempts to fill this gap .in particular , we investigate , through analysis and simulations , the performance of gsm in comparison with other mimo schemes including smp , ssk , gssk , and sm .our performance study reveals favorable results for gsm compared to other mimo schemes .the rest of this paper is organized as follows . in sec .[ sec2 ] , we present the considered indoor vlc system model . in sec .[ sec3 ] , we present the gsm scheme for vlc . in sec .[ sec4 ] , we derive an upper bound on the bit error probability of gsm for maximum likelihood ( ml ) detection in vlc . in sec .[ sec5 ] , we present a detailed performance comparison between gsm and other mimo schemes in vlc . finally , conclusions are presented in sec .consider an indoor vlc system with leds ( transmitter ) and photo detectors ( receiver ) .we assume that the leds have a lambertian radiation pattern , . in a given channel use ,each led is either off or emits light of some positive intensity , where is the set of all possible intensity levels .an led which is off is considered to send a signal of intensity zero .let denote the transmit signal vector , where the element of is .let denote the optical mimo channel matrix , given by where is the channel gain between led and photo detector , and .as in , we consider only the line - of - sight ( los ) paths between the leds and the photo detectors , and assume no time - dispersion ( because of negligible path delay differences between leds and photo detectors ) . from , the los channel gain calculated as ( see fig .[ sys ] for the definition of various angles in the model ) where is the angle of emergence with respect to the source ( led ) and the normal at the source , is the mode number of the radiating lobe given by is the half - power semiangle of the led , is the angle of incidence at the photo detector , is the area of the detector , is the distance between the source and the detector , fov is the field of view of the detector , and the leds and the photo detectors are placed in a room of size 5m.5 m as shown in fig .the leds are placed at a height of 0.5 m below the ceiling and the photo detectors are placed on a table of height 0.8 m .let denote the distance between the leds and denote the distance between the photo detectors ( see fig .[ place ] ) .we choose as 0.6 m and as 0.1 m .for example , when , the placement of leds and photo detectors is depicted in figs .[ placement1],[placement2 ] . when , the placement of leds is depicted in fig .[ placement3 ] . 
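the lambertian los gain just described is straightforward to evaluate numerically . the following is a minimal sketch ( python ) of that model ; the geometry , detector area , fov and half - power semiangle used in the example at the bottom are illustrative values only , not the exact configuration of table [ tab1 ] :

```python
import numpy as np

def lambertian_mode(phi_half_deg):
    # mode number m of the radiating lobe from the half-power semiangle
    phi = np.radians(phi_half_deg)
    return -np.log(2.0) / np.log(np.cos(phi))

def los_gain(tx, rx, n_tx, n_rx, area, fov_deg, phi_half_deg):
    """LOS channel gain between one LED at `tx` and one photodetector at `rx`.

    tx, rx       : 3-vectors of positions (m)
    n_tx, n_rx   : unit normals of the LED and of the detector
    area         : detector area (m^2)
    fov_deg      : detector field of view (deg)
    phi_half_deg : LED half-power semiangle (deg)
    """
    m = lambertian_mode(phi_half_deg)
    v = rx - tx
    d = np.linalg.norm(v)
    cos_phi = np.dot(n_tx, v) / d      # angle of emergence at the LED
    cos_psi = np.dot(n_rx, -v) / d     # angle of incidence at the detector
    if cos_phi <= 0.0 or cos_psi < np.cos(np.radians(fov_deg)):
        return 0.0                     # detector outside the radiation / field of view
    return (m + 1) / (2 * np.pi * d**2) * cos_phi**m * area * cos_psi

# hypothetical geometry: one LED 0.5 m below a 3.5 m ceiling, detector on the table
h = los_gain(tx=np.array([2.5, 2.5, 3.0]), rx=np.array([2.2, 2.4, 0.8]),
             n_tx=np.array([0.0, 0.0, -1.0]), n_rx=np.array([0.0, 0.0, 1.0]),
             area=1.0e-4, fov_deg=85.0, phi_half_deg=60.0)
print(h)
```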
assuming perfect synchronization ,the received signal vector at the receiver is given by where is an -dimensional vector with exactly non - zero elements such that each element in belongs to , is the responsivity of the detector and is the noise vector of dimension .each element in the noise vector is the sum of received thermal noise and ambient shot light noise , which can be modeled as i.i.d .real awgn with zero mean and variance .the average received signal - to - noise ratio ( snr ) is given by where $ ] , and is the row of .in gsm , information bits are conveyed not only through modulation symbols sent on active leds , but also through indices of the active leds . in each channel use, the transmitter selects out of leds to activate .this selection is done based on information bits .each active led emits an -ary intensity modulation symbol , where is the set of intensity levels given by where and is the mean optical power emitted .therefore , the total number of bits conveyed in a channel use in gsm is given by let denote the gsm signal set , which is the set of all possible gsm signal vectors that can be transmitted . out of the possible led activation patterns - tuple of the indices of the active leds in any given channel use .] , only activation patterns are needed for signaling ._ example 1 : _ let and . in this configuration ,the number of bits that can be conveyed through the led activation pattern is = 2 bits .let the number of intensity levels be , where and .this means that one bit on each of the active led is sent through intensity modulation .therefore , the overall transmission efficiency is 4 bpcu . in each channel use, four bits from the incoming bit stream are transmitted .of the four transmitted bits , the first two correspond to the led activation pattern and the next two bits correspond to the intensity levels of the active leds .this gsm scheme is illustrated in fig .[ gsm ] , where the first two bits ` 01 ' choose the active leds pair and the second two bits ` 10 ' choose the intensity levels , where led 1 emits intensity , led 3 emits intensity , and the other leds remain inactive ( off ) . in this example , we require only 4 activation patterns out of possible activation patterns . so the gsm signal set for this example can be chosen as follows : \frac{2}{3 } \\[0.5em ] 0 \\ 0\end{bmatrix } , \begin{bmatrix } \frac{2}{3 } \\[0.5em ] \frac{4}{3 } \\[0.5em ] 0 \\ 0\end{bmatrix } , \begin{bmatrix } \frac{4}{3 } \\[0.5em ] \frac{2}{3 } \\[0.5em ] 0 \\ 0\end{bmatrix } , \begin{bmatrix } \frac{4}{3 } \\[0.5em ] \frac{4}{3 } \\[0.5em ] 0 \\ 0\end{bmatrix } , \begin{bmatrix } \frac{2}{3 } \\[0.5em ] 0 \\ \frac{2}{3 } \\[0.5em ] 0\end{bmatrix } , \begin{bmatrix } \frac{2}{3 } \\[0.5em ] 0 \\ \frac{4}{3 } \\[0.5em ] 0\end{bmatrix } , \begin{bmatrix } \frac{4}{3 } \\[0.5em ] 0 \\ \frac{2}{3 } \\[0.5em ] 0\end{bmatrix } , \begin{bmatrix } \frac{4}{3 } \\[0.5em ] 0 \\ \frac{4}{3 } \\[0.5em ] 0\end{bmatrix } , \right . 
\nonumber \\ & & \left .\hspace{-1 mm } \begin{bmatrix } 0 \\[0.5em ] \frac{2}{3 } \\ 0\\ \frac{2}{3}\\[0.5em]\end{bmatrix } , \begin{bmatrix } 0 \\[0.5em ] \frac{2}{3 } \\0\\ \frac{4}{3}\\[0.5em]\end{bmatrix } , \begin{bmatrix } 0 \\[0.5em ] \frac{4}{3 } \\ 0\\ \frac{2}{3}\\[0.5em]\end{bmatrix } , \begin{bmatrix } 0 \\[0.5em ] \frac{4}{3 } \\0\\ \frac{4}{3}\\[0.5em]\end{bmatrix } , \begin{bmatrix } 0\\ 0 \\[0.5em ] \frac{2}{3 } \\[0.5em ] \frac{2}{3}\end{bmatrix } , \begin{bmatrix } 0\\ 0 \\[0.5em ] \frac{2}{3 } \\[0.5em ] \frac{4}{3}\end{bmatrix } , \begin{bmatrix } 0\\ 0 \\[0.5em ] \frac{4}{3 } \\[0.5em ] \frac{2}{3}\end{bmatrix } , \begin{bmatrix } 0\\ 0 \\[0.5em ] \frac{4}{3 } \\[0.5em ] \frac{4}{3}\end{bmatrix } \right \}. \nonumber\end{aligned}\ ] ] .,height=182 ] _ example 2 : _ let and . to achieve a transmission efficiency of 8 bpcu ,we need four intensity levels , . in this case, we need only 16 activation patterns out of possible activation patterns .the choice of these activation patterns will determine the performance of the gsm system , since choosing a particular activation pattern can alter the minimum euclidean distance between any two gsm signal vectors and for a given , which is given by similarly , the average euclidean distance between any two vectors and for a given is given by since in ( [ dmin ] ) and in ( [ davg ] ) influence the link performance , we use them as the metrics based on which the optimum placement of leds is chosen .specifically , we choose the placement of the leds at the transmitter such that the and of the placement are maximized over all possible placements , as follows .we first choose the placement(s ) for which the is maximum .for placement of leds in a grid , we enumerate all possible led placements in the grid and compute the in ( [ dmin ] ) for all these placements and choose the one with the maximum .if there are multiple placements for which is maximum , we then compute as per ( [ davg ] ) for these placements and choose the one with the maximum .for example , for the system parameters specified in table [ tab1 ] and a required transmission efficiency of 8 bpcu ( using ) , the best placement of leds in a grid that maximizes and is shown in fig .[ placements](a ) .likewise , the best led placements for systems with ( , 5 bpcu ) , ( , 8 bpcu ) , ( , 8 bpcu ) , and ( , 8 bpcu ) in a grid are as shown in figs . [placements](b),(c),(d),(e ) , respectively ..[tab1 ] system parameters in the considered indoor vlc system .[ cols= " < , < , < " , ] bpcu ..,height=240 ] here , we present the ber performance of gsm in vlc as a function of the spacing between the leds ( ) by fixing other system parameters .figure [ dtx ] presents the ber performance of gsm as a function of in vlc with bpcu , for different values of snr = 75 db , 60 db , 40 db .it can be observed from fig .[ dtx ] that there is an optimum spacing which achieves the best ber performance ; below and above this optimum spacing , the ber performance gets worse .the optimum is found to be 1 m in fig .this optimum spacing can be explained as follows . on the one hand ,the channel gains get weaker as increases .this reduces the signal level received at the receiver , which is a source of performance degradation . on the other hand , the channel correlation also gets weaker as is increased .this reduced channel correlation is a source of performance improvement .these opposing effects of weak channel gains and weak channel correlations for increasing leads to an optimum spacing . 
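the placement selection described above ( keep the candidate placements with the largest minimum euclidean distance , then break ties with the average distance ) can be sketched as a brute - force search . in the following sketch ( python ) the candidate channel matrices and the small signal set are purely illustrative placeholders ; in practice each candidate matrix would be assembled from the los gains of the leds sitting on the chosen grid points :

```python
import itertools
import numpy as np

def distance_metrics(H, S):
    """d_min and d_avg of the received constellation {H x : x in S}."""
    Y = [H @ x for x in S]
    d = [np.linalg.norm(a - b) for a, b in itertools.combinations(Y, 2)]
    return min(d), float(np.mean(d))

def best_placement(candidates, S):
    """Pick the placement with the largest d_min, breaking ties by d_avg.

    `candidates` maps a placement label to its channel matrix H.
    """
    scored = {k: distance_metrics(H, S) for k, H in candidates.items()}
    dmin_max = max(v[0] for v in scored.values())
    tied = {k: v for k, v in scored.items() if np.isclose(v[0], dmin_max)}
    return max(tied, key=lambda k: tied[k][1])

# toy illustration: two made-up 2x2 channel matrices and a 2-LED signal set
S_toy = [np.array([2/3, 0.0]), np.array([4/3, 0.0]),
         np.array([0.0, 2/3]), np.array([0.0, 4/3])]
H_a = np.array([[1.0e-5, 3.0e-6], [3.0e-6, 1.0e-5]])   # weakly correlated
H_b = np.array([[1.0e-5, 8.0e-6], [8.0e-6, 1.0e-5]])   # strongly correlated
print(best_placement({"a": H_a, "b": H_b}, S_toy))      # -> "a"
```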
in vlc with bpcu , , for different values of snr = 75 db , 60 db , 40 db.,height=240 ] here, we present the effect of varying the half - power semiangle ( ) on the ber performance of gsm in vlc . in fig .[ phi ] , we present the ber as a function of in a vlc system bpcu , and .ber versus plots for snr = 45 db , 60 db are shown .it can be observed that the ber performance is good for small half - power semiangles , and it degrades as the half - power semiangle is increased .this is because , fixing all other system parameters as such and decreasing increases the mode number , and hence the channel gain .this increased channel gain for decreasing is one reason for improved ber at small .another reason is that the channel correlation decreases as decreases .this decreased channel correlation also leads to improved performance at small .in vlc with bpcu , , .,height=240 ]in this section , we compare the performance of gsm with those of other mimo schemes including smp , ssk , gssk , and sm , for the same transmission efficiency . in all cases , optimum placement of leds in a gridis done based on maximizing and , as described in sec .[ opt_place ] . in fig .[ 4bpcu ] , we present the ber performance of smp , ssk , gssk , sm , and gsm , all having a transmission efficiency of bpcu . a gsm system with which uses only 4 activation patterns chosen out of activation patterns and gives 4 bpcu is considered .the optimum placement of leds for this gsm system is as shown in fig .[ new_place](a ) .the other mimo schemes with 4 bpcu transmission efficiency considered for comparison are : smp : , leds placement as in fig .[ placements](a ) , ssk : , leds placement as in fig .[ place](c ) , i.e. , one led on each of the grid point , gssk : , leds placement as in fig .[ new_place](b ) , and sm : , leds placement as in [ placements](a ) . from fig .[ 4bpcu ] , it can be seen that sm outperforms smp , which is due to spatial interference in smp .it is also observed that sm performs better than ssk and gssk .this is because ssk has more leds and hence the and in ssk are smaller than those in sm .also , in gssk , 2 leds are activated simultaneously leading to spatial interference , and this makes gssk to perform poorer than sm .both ssk and gssk perform better than smp , due to the dominance of spatial interference in smp .it is further observed that gsm performs almost the same as sm , with marginally inferior performance at low snrs ( because of the effect of spatial interference in gsm ) and marginally better performance at high snrs ( because of better and in gsm ) .the performance advantage of gsm over sm at high snrs is substantial at 8 bpcu transmission efficiency ( about 10 db advantage at ber ) , which is illustrated in fig . [figure [ 8bpcu ] compares the performance of the following systems , all having 8 bpcu efficiency : smp : , leds placement as in [ placements](a ) , gssk : , leds placement as in [ new_place](c ) , sm : , leds placement as in [ place](c ) , and gsm : , leds placement as in [ placements](c ) . from fig . [8bpcu ] , it is observed that gsm achieves the best performance among the considered schemes at moderate to high snrs ( better by about 10 db compared to sm , and by about 25 db compared to gssk and smp at ber ) .the reason for this is as explained in the performance comparison in fig .[ 4bpcu ] . grid . 
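the comparisons reported next are made at equal transmission efficiency . for reference , the per - scheme efficiencies recalled in the introduction can be evaluated as in this short sketch ( python 3.8 + for math.comb ; the configurations at the bottom are only examples ) :

```python
from math import comb, floor, log2

def bpcu(scheme, Nt, M=1, Na=1):
    """Transmission efficiency (bits per channel use) of the MIMO schemes above."""
    if scheme == "smp":  return Nt * log2(M)                        # all LEDs active, M-PAM on each
    if scheme == "ssk":  return log2(Nt)                            # index of the single active LED
    if scheme == "gssk": return floor(log2(comb(Nt, Na)))           # indices of the Na active LEDs
    if scheme == "sm":   return log2(Nt) + log2(M)                  # LED index plus one M-PAM symbol
    if scheme == "gsm":  return floor(log2(comb(Nt, Na))) + Na * log2(M)
    raise ValueError(scheme)

# e.g. two 4-bpcu configurations (illustrative values)
print(bpcu("sm", Nt=4, M=4), bpcu("gsm", Nt=4, Na=2, M=2))
```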
indicates the presence of an led and indicates of absence of led.,height=134 ] bpcu ..,height=244 ] bpcu ..,height=244 ] bpcu , , , .,height=240 ] in fig .[ 10bpcu ] , we compare the ber performance of sm and gsm in vlc , both having the same bpcu , , and .the sm and gsm system parameters are : sm : , and gsm : .the placement of leds in both cases is as in fig .[ placements](a ) .it is observed that gsm significantly outperforms sm ( by about 25 db at ber ) .this performance advantage of gsm over sm can be attributed to the following reasons .the channel matrix becomes less correlated for , which results in less spatial interference in gsm . despite the presence of multiple active leds ( ) and hence spatial interference in gsm , to achieve 10 bpcu transmission efficiency , gsm requires a much smaller - sized modulation alphabet compared to that required in sm .the better power efficiency in a smaller - sized modulation alphabet compared to a larger - sized alphabet dominates compared to the degrading effect of spatial interference due to , making gsm to outperform sm .we investigated the performance of gsm , an attractive mimo transmission scheme , in the context of indoor wireless vlc .more than one among the available leds are activated simultaneously in a channel use , and the indices of the active leds also conveyed information bits in addition to the information bits conveyed by the intensity modulation alphabet . to our knowledge ,such a study of gsm in vlc has not been reported before .we derived an analytical upper bound on the ber of gsm with ml detection in vlc .the derived bound was shown to be very tight at moderate to high snrs .the channel gains and channel correlations influenced the gsm performance such that the best ber is achieved at an optimum led spacing . also , the gsm performance in vlc improved as the half - power semi - angle of the leds is decreased .we compared the ber performance of gsm with those of other mimo schemes including smp , ssk , gssk and sm .analysis and simulation results revealed favorable performance for gsm compared to other mimo schemes .j. barry , j. kahn , w. krause , e. lee , and d. messerschmitt , `` simulation of multipath impulse response for indoor wireless optical channels , '' _ ieee j. sel .areas in commun .367 - 379 , apr . 1993 .d. tsonev , h. chun , s. rajbhandari , j. j. d. mckendry , d. videv , e. gu , m. haji , s. watson , a. e. kelly , g. faulkner , m. d. dawson , h. haas , and d. obrien , `` a 3-gb / s single - led ofdm - based wireless vlc link using a gallium nitride , '' _ ieee photonics tech . lett .26 , no . 7 , pp .637 - 640 , jan . 2014 .t. q. wang , y. a. sekercioglu , and j. armstrong , `` analysis of an optical wireless receiver using a hemispherical lens with application in mimo visible light communications , '' _ j. lightwave tech .1744 - 1754 , jun . 2013 .w. popoola , e. poves , and h. haas , `` generalised space shift keying for visible light communication , '' _ proc .symp . on commun .systems , networks and digital signal processing ( csndp 2012 ) _ , pp . 1 - 4 , jul .2012 .w. o. popoola and h. haas , `` demonstration of the merit and limitation of generalised space shift keying for indoor visible light communications , '' _ j. lightwave tech .1960 - 1965 , may 2014 .j. wang , s. jia , and j. song , `` generalised spatial modulation system with multiple active transmit antennas and low complexity detection scheme , '' _ ieee trans .wireless commun .1605 - 1615 , apr . 2012 .l. zeng , d. obrien , h. 
le minh , k. lee , d. jung , and y. oh , `` improvement of date rate by using equalization in an indoor visible light communication system , '' _ proc .ieee iccsc 2008 _ , pp .678 - 682 , may 2008 .
|
in this paper , we investigate the performance of generalized spatial modulation ( gsm ) in indoor wireless visible light communication ( vlc ) systems . gsm uses light emitting diodes ( led ) , but activates only of them at a given time . spatial modulation and spatial multiplexing are special cases of gsm with and , respectively . we first derive an analytical upper bound on the bit error rate ( ber ) for maximum likelihood ( ml ) detection of gsm in vlc systems . analysis and simulation results show that the derived upper bound is very tight at medium to high signal - to - noise ratios ( snr ) . the channel gains and channel correlations influence the gsm performance such that the best ber is achieved at an optimum led spacing . also , for a fixed transmission efficiency , the performance of gsm in vlc improves as the half - power semi - angle of the leds is decreased . we then compare the performance of gsm in vlc systems with those of other mimo schemes such as spatial multiplexing ( smp ) , space shift keying ( ssk ) , generalized space shift keying ( gssk ) , and spatial modulation ( sm ) . analysis and simulation results show that gsm in vlc outperforms the other considered mimo schemes at moderate to high snrs ; for example , for 8 bits per channel use , gsm outperforms smp and gssk by about 21 db , and sm by about 10 db at ber .
|
the behaviour of nuclear matter at the extreme densities reached in neutron - star cores is determined by the properties of fundamental interactions in regimes that are still poorly known and that are not accessible in experiments on earth . a broad variety of different models for supernuclear matter , and thus for the inner structure of neutron stars , has been proposed .although neutron - star properties depend sensitively on the equation of state ( eos ) , approximately `` universal '' relations between several dimensionless quantities have been found during the last years .the `` universal '' character in these relations comes from their weak dependence on the eos and this allows one to use them to constrain quantities that are difficult to access experimentally , such as the neutron star radius . already in 1994, highlighted an apparently universal relation between the normalized moment of inertia and the stellar compactness in the case of eoss without an extreme softening at supernuclear densities .this relation was later refined by and , and then employed by to point out that using such an empirical relation it is possible to estimate the radius of a neutron star via the combined measurement of the mass and moment of inertia of a pulsar in a binary system .additional evidence that eos - independent relations could be found among quantities characterizing compact stars was also pointed out by [ and later on revisited by ] , who showed that a tight correlation exists between the frequency of the fundamental mode of oscillation and the stellar average density .more recently , however , have found that suitably normalized expressions for the moment of inertia , the quadrupole moment and the tidal love number are related by functions independent of the eos to within in the slow - rotation approximation and assuming small tidal deformations [ see also the related and earlier work by ; ] .these relations are particularly useful as they may help remove some degeneracies as those appearing , for instance , in the modelling of the gravitational - wave signal from inspiralling binaries [ see ; for a discussion ] .the universality , however , should be meant as approximate and is indeed preserved only within large but defined regimes , such as the slow - rotation approximation or when the magnetic fields are not particularly strong .more specifically , it has been shown by that the universality in the relation between and breaks down in the case of rapid rotation for sequences with constant spin frequency .this result was partially revised by , who have shown that the universality is partly recovered if the rotation is characterized not by the spin frequency , but by the dimensionless angular momentum , where and are the angular momentum and mass of the star , respectively [ see also ] . at the same time , have shown that the universality between and is lost for stars with long spin periods , i.e. , , and strong magnetic fields , i.e. , g .the reason behind this behaviour is rather simple : the anisotropic stresses introduced by a magnetic field can break the overall spherical symmetry present for stars that are nonrotating or in slow rotation . in turn , this affects the universal behaviour , which is based on the balance between gravity and the behaviour of fluids in strong gravitational fields . 
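for concreteness , we recall the normalizations conventionally adopted for these dimensionless quantities in the literature on such relations ( in geometrized units with G = c = 1 ) :
\[
\bar{I}\equiv\frac{I}{M^{3}}, \qquad
\bar{Q}\equiv-\frac{Q}{M^{3}\chi^{2}}, \qquad
\bar{\lambda}\equiv\frac{\lambda}{M^{5}}, \qquad
\chi\equiv j\equiv\frac{J}{M^{2}} ,
\]
where Q is the rotation - induced quadrupole moment and \lambda the tidal deformability .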
over the last couple of years ,this area of research has been extremely active , with investigations that have considered universality also with higher multipoles or that have sought universality in response to alternative theories of gravity .a large bulk of work has also tried to provide a phenomenological justification for the existence of such a behaviour .for instance , it has been suggested that , and are determined mostly by the behaviour of matter at comparatively low rest - mass densities , where realistic eoss do not differ significantly because they are better constrained by nuclear - physics experiments .alternatively , it has been suggested that the multipole moments approach the limiting values of a black hole towards high compactness , which implies that approximate no - hair - like relations exist also for non - vacuum spacetimes .finally , it has been proposed that nuclear - physics eoss are stiff enough that the nuclear matter can be modelled with an incompressible eos . in this interpretation , low - mass stars , which are composed mainly of soft matter at low densities ,would depend more sensitively on the underlying eos , while the universality is much stronger towards higher compactness , where matter approaches the limit of an incompressible fluid . in this paper , we take a slightly different view and reconsider well - known universal relations to devise an effective tool to constrain the radius of a compact star from the combined knowledge of the mass and moment of inertia in a binary system containing a pulsar . more specifically ,we first show that a universal relation holds for the maximum mass of uniformly rotating stars when expressed in terms of the dimensionless and normalized angular momentum .this relation extends and provides a natural explanation for the evidence brought about by of a proportionality between the maximum mass allowed by uniform rotation and the maximum mass of the corresponding nonrotating configuration .finally , we show that the dimensionless moment of inertia for slowly rotating compact stars correlates tightly and universally with the compactness , where is the radius of the star .next , we provide an analytical expression for such a correlation and show that it improves on previous expressions , yielding relative errors in the estimate of the radius that are for a large range of masses and moment of inertia. we should note that universal relations and have been shown to hold by and , respectively .hence , it is not surprising that a universal relation also holds .however , to the best of our knowledge , this is the first time that such a relation is discussed in detail and that its implications are investigated in terms of astrophysical measurements .the plan of the paper is as follows . in section [ sec : setup ] , we briefly review the mathematical and numerical setup used for the calculation of our equilibrium stellar models ( from nonrotating configurations to rapidly rotating ones ) .section [ sec : maximumass ] is dedicated to our results and in particular to the new universal relation between the maximum mass and the normalized angular momentum .section [ sec : results ] is instead dedicated to the comparison between our expression for the dimensionless moment of inertia and previous results in the literature , while in section [ sec : applications ] we discuss how the new relation can be used to deduce the radius of a compact star once a measurement is made of its mass and moment of inertia . 
finally , section [ sec : conclusions ] contains our conclusions and prospects for future work ; the appendix provides the derivation of an analytic expression to evaluate the relative error in the estimate .we have constructed a large number of equilibrium models of compact stars that are either nonrotating , slowly or rapidly rotating . in the first two cases, solutions can be obtained after the integration of ordinary differential equations in spherical symmetry , while in the third case we have made use of a two - dimensional numerical code solving elliptic partial differential equations .more specifically , we consider the stellar matter as a perfect fluid with energy - momentum tensor where is the fluid 4-velocity , the four - metric , while and are the fluid energy density and pressure , respectively . in the case of non - rotating stars ,we take a spherically symmetric metric where and are metric functions of the radial coordinate only .equilibrium models are then found as solutions of the tolman oppenheimer volkoff ( tov ) equations where the function is defined as these tov equations need to be supplemented with an eos providing a relation between different thermodynamic quantities , and we have used 15 nuclear - physics eoss in tabulated form . for all of them ,beta equilibrium and zero temperature were assumed , so that the eos reduces to a relation between the pressure and the rest - mass density ( or the energy density ) . for more complex eossin which a temperature dependence is available , we have used the slice at the lowest temperature . in our analysiswe have considered 28 several different theoretical approaches to the eos and , in particular : the nuclear many - body theory [ apr4 , ; wff1 , wff2 , ] , the non - relativistic skyrme mean - field model [ rs , sk255 , sk272 , ska , skb , ski2-ski6 , skmp , , sly2 , sly4 , sly9 , ] , the mean - field theory approach [ eos l , ; hs dd2 , hs nl3 , hs tm1 , hs tma , sfho , sfhx , ; gshen - nl3 , , the walecka model [ eos - o , ] and the liquid - drop model [ ls220 ; ] .all of these models are able to support a neutron star with a maximum mass of at least 2.0 and are therefore compatible with the discovery of neutron stars with masses of about 2 . at the same time, we note that because strong phase transitions above nuclear saturation density can affect the radii of low - mass neutron stars , determining whether a phase transition occurs ( and if so at which rest - mass density ) represents an important consideration not contemplated in these eoss , but that will be the focus of future work .it is possible to extend the validity of the tov solutions by considering stellar models in the slow - rotation approximation .however , already in , a full second - order treatment was developed , and subsequently applied by for a systematic study of rotating relativistic stars . ] . 
in this case , spherical symmetry is still preserved , but rotational corrections do appear at first order in the stellar angular velocity , and the line element is replaced by its slow - rotation counterpart \,,\end{aligned}\ ] ] where , are still functions of the radial coordinate only and represents the angular velocity of the inertial frames dragged by the stellar rotation .the set of equations ( [ eq : m])([eq : nu ] ) needs then to be augmented with a differential equation for the relative angular velocity where is the difference between the angular velocity acquired by an observer falling freely from infinity and the angular velocity of a fluid element measured by an observer at rest at some point in the fluid .a numerical solution to equations ( [ eq : m])([eq : omega ] ) can be obtained by integrating them outwards from the stellar centre using , for instance , a fourth - order runge - kutta algorithm . at the stellar surface, must be matched to the analytic exterior solution which has the form where is the total angular momentum of the star .the integration , which can be started with an arbitrary value for , can then be adjusted so as to match the boundary condition .once a slowly rotating solution with angular momentum and spin frequency has been found , the corresponding moment of inertia can then be computed as where the derivative in the last equality is meant to be taken at the stellar surface .note that at this order the angular momentum depends linearly on the angular velocity , so that the moment of inertia does not depend on .quite generically , the moment of inertia increases almost linearly with the mass for sufficiently small masses ; however , as the maximum mass is approached , it decreases rapidly .this behaviour is due to the fact that depends quadratically on the radius , which decreases significantly near the maximum mass .we should note that treatments higher than the first - order in have been also presented , in , where even the third - order terms were included [ see also ] .these treatments , however , are still not very accurate for rapidly rotating models of compact stars .hence , we have relied on fully numerical solutions constructed using the open - source code rns , which solves the einstein equations in axisymmetry and in the conformally flat approximation . in this case , the spacetime is assumed to be stationary and axisymmetric , and can be described by a metric of the form where the metric potentials , , and are functions of the quasi - isotropic coordinates and .stellar models can then be computed along sequences in which the central energy density and the axis ratio ( or spin frequency ) is varied .with the exception of those works that have looked at the existence of universality in merging systems of neutron - star binaries , all of the work done so far on universal relations in isolated relativistic stars has concentrated on _ stable _ equilibrium configurations .this is of course the most natural part of the space of parameters that is worth investigating .yet , it is interesting to assess whether universality is present also for stellar models that are either marginally stable or that actually represent unstable equilibrium models . 
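as a concrete illustration of how the nonrotating equilibrium models underlying the above construction can be obtained , the following is a minimal sketch ( python ) of an outward integration of the tov equations ; it assumes geometrized units and a simple polytropic eos in place of the tabulated nuclear - physics eoss used in this work , and the central density in the example is illustrative only :

```python
import numpy as np

def tov_solve(rho_c, K=100.0, Gamma=2.0, dr=1.0e-3):
    """Integrate the TOV equations for a polytrope p = K rho^Gamma (G = c = Msun = 1).

    rho_c : central rest-mass density
    Returns (M, R): gravitational mass and areal radius of the star.
    """
    def eps_of_p(p):                       # total energy density of the polytrope
        p = max(p, 0.0)                    # guard against tiny negative overshoots
        rho = (p / K) ** (1.0 / Gamma)
        return rho + p / (Gamma - 1.0)

    def rhs(r, y):                         # y = (m, p)
        m, p = y
        eps = eps_of_p(p)
        dm = 4.0 * np.pi * r**2 * eps
        dp = -(eps + p) * (m + 4.0 * np.pi * r**3 * p) / (r * (r - 2.0 * m))
        return np.array([dm, dp])

    p_c = K * rho_c**Gamma
    r = dr
    y = np.array([4.0 / 3.0 * np.pi * dr**3 * eps_of_p(p_c), p_c])
    while y[1] > 1.0e-12 * p_c:            # stop when the pressure vanishes
        k1 = rhs(r, y)                     # classical 4th-order Runge-Kutta step
        k2 = rhs(r + dr / 2, y + dr / 2 * k1)
        k3 = rhs(r + dr / 2, y + dr / 2 * k2)
        k4 = rhs(r + dr, y + dr * k3)
        y = y + dr / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        r = r + dr
    return y[0], r                         # M and R in geometric solar-mass units

M, R = tov_solve(rho_c=1.28e-3)            # illustrative central density
print(M, R, M / R)                         # mass, radius, compactness C = M/R
```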
to this scope, we have investigated whether a universal behaviour can be found also for models whose gravitational mass is at the critical - mass limit or reasonably close to it .we recall that proved that a sequence of uniformly rotating barotropic stars is secularly unstable on one side of a turning point , i.e. , when it is an extremum of mass along a sequence of constant angular momentum , or an extremum of angular momentum along a sequence of constant rest - mass . furthermore , arguing that viscosity would lead to uniform rotation , they concluded that the turning point should identify the onset of secular instability .while for nonrotating stars the turning point does coincides with the secular - instability point ( and with the dynamical - instability point for a barotropic star if the perturbation satisfies the same equation of state of the equilibrium model ) , for rotating stars it is only a sufficient condition for a secular instability , although it is commonly used to find a dynamical instability in simulations .more recently , have computed the neutral - stability line for a large class of stellar models , i.e. , the set of stellar models whose -mode frequency is vanishingly small ; such a neutral - stability point in a nonrotating star marks the dynamical - stability limit . the latter coincides with the turning - point line of for nonrotating stars , but differs from it as the angular momentum is increased , being located at smaller central rest - mass densities as the angular momentum is increased .stated differently , the results of have shown that equilibrium stellar models on the turning - point line are effectively dynamically _unstable_. this result , which was also confirmed by numerical simulations , does not contradict turning - point criterion since the latter is only a sufficient condition for secular instability [ see also the discussion by ] . determining the critical mass in a way that is independent of the eos is critical in many astrophysical scenarios , starting from those that want to associate fast radio bursts to a `` blitzar '' and hence to the collapse of a supramassive neutron star , over to those that apply this scenario to the merger of binary neutron - star systems and are thus interested in the survival time of the merger to extract information on the eos , or to those scenarios in which the late collapse of the binary merger product can be used to explain the extended x - ray emission in short gamma - ray burst .we should note that this is not the first time that `` universal '' relations in the critical mass are considered .indeed , have already discussed that there is a close correlation between the `` mass - shedding '' ( or keplerian ) frequency and the mass and radius of the maximum mass configuration in the limit of no rotation , i.e. , and .because larger masses can support larger keplerian frequencies , the maximum critical mass is defined as the largest mass for stellar models on the critical ( turning - point ) line . after considering a large set of eoss , have shown that there exists a universal proportionality between the radii and masses of maximally rotating and of static configurations . in view of the simplicity of computing stellar models on the turning - point line and given that these models are very close to those on the neutral - stability line , we have computed the maximum masses of models along constant angular momentum - sequences , i.e. 
, such that , where is the central rest - mass density .although these models are strictly speaking unstable , for simplicity we have dubbed them `` critical masses '' , .the values of these masses as a function of the angular momentum are reported in the left panel of of fig .[ fig : critical ] , which shows that a large variance exists with the eos when the data is reported in this way .each sequence terminates with the `` maximum '' ( critical ) mass that is supported via ( uniform ) rotation , where is the maximum angular momentum that can be attained normalised to the maximum mass , i.e. , .note that the maximum mass in this case can range from values as small as for , up to for .the same data , however , can be expressed in terms of dimensionless quantities and , more specifically , in terms of the critical mass normalised to the maximum value of the corresponding nonrotating configuration , i.e. , and of the dimensionless angular momentum when the latter is normalized to the maximum value allowed for that eos , .such a data is collected in the right panel of fig .[ fig : critical ] and shows that the variance in this case is extremely small .indeed , it is possible to express such a behaviour with a simple polynomial fitting function of the type where the coefficients are found to be and , with a reduced chi squared and where , of course , for . an immediate consequence of eq .is also a very important result .irrespective of the eos , in fact , the maximum mass that can be supported through uniform rotation is simply obtained after setting and is therefore given by where we have taken as error not the statistical one ( which would be far smaller ) but the largest one shown in the comparison between the fit and the data . in summary ,although different eoss give substantially different maximum masses and are able to reach substantially different angular momenta , the maximum mass that can be supported at the mass - shedding limit is essentially universal and is about 20% larger than the corresponding amount in the absence of rotation . also shown in the bottom part of the panel is the relative error between the normalised critical mass and its estimate coming from the fit , .note that the error for the largest angular momentum is below for all the eoss considered and that it is below for most rotation rates .similar conclusions on the maximum mass were reached also by , although with a larger variance , probably due to the use of eoss that did not satisfy the constraint of and different normalizations .this confirms the idea that the uncertainties in correlations of this type could be further reduced if future observations will increase the value of the maximum mass .more importantly , expression highlights that a universal relation is present not only for the maximum critical mass , but is is valid also for any value of the dimensionless angular momentum , with a variance that is actually smaller for slowly rotating models .this is a result that was not discussed by . despite the important implications that expression has , its use in the form aboveis very limited as it requires the implicit knowledge of , as well as of and .a way round this limitation is possible if one makes the reasonable assumption that for a given angular momentum , the mass and radius of the mass - shedding model is proportional to the mass and radius of the maximum - mass nonrotating model , i.e. 
, and that the keplerian frequency follows the classical scaling in mass and radius as a result , the angular momentum at the mass - shedding limit can be expressed also as a function of the maximum - mass nonrotating configuration and thus the coefficients , and can be computed via a fit to the data , yielding : , , and , while the coefficient can be derived as . alternatively , and more accurately , can be computed via a direct fit of expression , which then yields .we can now use expression in the dimensionless fit to obtain where the coefficients have numerical values and .expression has the desired features , since the critical mass is now function only of the stellar angular momentum and of the properties ( mass and radius ) of the maximum - mass nonorotating configuration , and .note that expression is not in an explicit form , since the right - hand side still contains information on the critical mass via .however , a simple root - finding algorithm can be used to obtain once , , and are specified .in what follows we discuss universal relations between the moment of inertia and the compactness . we will first consider the case of slowly rotating stars ( section [ sec : ic_sr ] ) and then that of rapidly rotating ones ( section [ sec : ic_rr ] ) . starting with , a number of authors have pointed out that the dimensionless moment of inertia can be expressed in terms of the stellar compactness via simple low - order polynomial expressions that have a weak dependence on the eos .expressions of this type have been proposed by , as well as by , with comparable level of precision .a higher - order polynomial fit was proposed subsequently by ( ls ) , who expressed it as and discussed how such a relation could be used to constrain the stellar radius once and are measured , e.g. , in a pulsar within a binary system .the left panel of fig .[ fig : ls ] reports with a scatter plot the values of relative to slowly rotating stars [ i.e. , stars treated in the slow - rotation approximation ] as computed for a number of different eoss and in a large range of compactnesses , i.e. , with $ ] ; this is also the range in compactness over which the fits are performed .different symbols and colours refer to different eoss as reported in the legend .also indicated with a black solid line is the ls fit expressed by , while the shaded band band reports the error in the fitting coefficients .the fitting coefficients found originally by are : , and , while those produced by our analysis are slightly different and given by : , and .the main differences in the estimates ( especially in the quartic term ) are due to the different set of eoss used and , more importantly , to the fact that the estimates by were based in part on eoss which are now excluded by observations of a 2 neutron star . and found that fitting functions of the type were valid for a wide range of eos except the ones which exhibited extreme softening .this is measured by the relative error , which is reported in the lower part of the left panel of fig .[ fig : ls ] , and which shows the errors are generally below . 
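Because the fitted expression for the critical mass is implicit (its right-hand side still depends on the critical mass through the dimensionless angular momentum), a one-dimensional root finder is enough to evaluate it once the angular momentum and the nonrotating maximum mass and radius are given, as noted above. The sketch below uses bisection; the functional form, all coefficients and the Keplerian-spin estimate are placeholders chosen only for illustration, not the values fitted in the text, and geometrized units are assumed.

```python
def critical_mass(J, M_tov, R_tov, c2=0.10, c4=0.05, c_kep=0.65,
                  tol=1e-10, max_iter=200):
    """Solve the implicit relation  M = M_tov * (1 + c2*(j/j_kep)**2 + c4*(j/j_kep)**4)
    for the critical mass M, where j = J/M**2 depends on M itself.

    The coefficients (c2, c4) and the Keplerian-spin estimate
    j_kep = c_kep*sqrt(M_tov/R_tov) are purely illustrative placeholders.
    Geometrized units (G = c = 1) are assumed, so masses and radii share one unit.
    """
    j_kep = c_kep * (M_tov / R_tov) ** 0.5

    def residual(M):
        j = J / M**2
        return M - M_tov * (1.0 + c2 * (j / j_kep) ** 2 + c4 * (j / j_kep) ** 4)

    lo, hi = M_tov, 2.0 * M_tov      # the critical mass lies above the nonrotating maximum
    if residual(lo) > 0.0 or residual(hi) < 0.0:
        raise ValueError("bracket does not contain a root for these inputs")
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0.0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Example call with made-up numbers: about 2 solar masses expressed in km, and a 15 km radius.
print(critical_mass(J=3.5, M_tov=3.0, R_tov=15.0))
```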
at the same time, the large scatter of the data suggests that a tighter fit could be obtained if a different normalization is found for the moment of inertia .hence , inspired by the normalization s proposed by and , we have considered to fit the moment of inertia through a functional expansion of the type there are two main motivations behind this choice .the first one is that it is clear that at lowest order and hence an expansion in terms of inverse powers of the compactness is rather natural .the second one is the realisation that universal relations exist between and and between and , so that a universal relation should also exist also between and , although not yet discussed in the literature ..summary of various averaged norms of the the residuals of the two fits , i.e. , the averages over all eoss of all the residuals when normalizing the moment of inertia as or , respectively. the last column reports instead the largest infinity norm , i.e. , the largest residual across all eoss .the values of the last two columns are reported as shaded areas ( red and blue , respectively ) in fig .[ fig : ls ] . [ cols="<,^,^,^,^",options="header " , ] three remarks are worth making .first , the fitting could be further improved if the normalization of the moment of inertia is modified in terms of dimensionless quantities , e.g. , a given power of the compactness as suggested by .while useful to tighten the fit , we are not certain that this approach is effective in general or provides additional insight ; hence , we will not adopt it here .second , the quantity is sometimes associated to what is called the `` effective compactness '' , where one is considering that .clearly , as the fit in expression reveals , this association is reasonable only at the lowest order . finally , while the relation provides a description of the universal behaviour of the moment of inertia that is slightly more accurate than the one captured by the relation , the visual impression provided by fig .[ fig : ls ] enhances the impression of a better fit for the relation . in practice , the much larger range span by across the relevant compactnesses induces one to believe that the universality is far superior in this latter case . as testified by the data in the lower plots , the error is , on average , only slightly smaller . as mentioned in section [ sec : setup ] , within the slow - rotation approximation the properties of the compact stars do not depend on the magnitude of rotation and the surface is still given by a 2-sphere of constant radial coordinate .when the stars are rapidly rotating , on the other hand , not only the effect of the frame dragging needs to be included , but also the change in the stellar surface needs to be taken into account . more specifically , as a rotating star approaches the `` mass - shedding '' ( or keplerian ) limit , that is , the limit at which it is spinning so fast so as to lose matter at the equator , its mass and equatorial radius increase while the polar radius decreases , leading to an overall oblate shape . 
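A hedged sketch of how the comparison between the two normalizations can be set up is the following: synthetic data standing in for the EOS catalogue is fitted once as the moment of inertia over M R^2 against a polynomial in the compactness, and once as the moment of inertia over M^3 against inverse powers of the compactness, after which the averaged and maximum relative residuals (the quantities collected in the table above) are printed. The synthetic generating law and the fit orders are assumptions made only so the example runs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the (compactness, moment of inertia) data of many EOSs.
C = rng.uniform(0.10, 0.30, size=400)                       # compactness M/R
M = rng.uniform(1.0, 2.2, size=400)                         # mass in solar masses
R = M / C
I_true = M * R**2 * (0.24 + 0.9 * C)                        # rough, purely illustrative law
I_obs = I_true * (1.0 + 0.02 * rng.standard_normal(400))    # 2% scatter mimicking EOS variance

def fit_and_residuals(x, y, powers):
    """Least-squares fit  y ~ sum_k a_k * x**p_k  and the relative residuals."""
    A = np.column_stack([x**p for p in powers])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    res = np.abs((A @ coeffs) / y - 1.0)
    return coeffs, res

# Normalization 1: I / (M R^2) as a polynomial in C (in the spirit of the LS-type fit).
_, res_MR2 = fit_and_residuals(C, I_obs / (M * R**2), powers=[0, 1, 2, 3, 4])

# Normalization 2: I / M^3 as an expansion in inverse powers of C.
_, res_M3 = fit_and_residuals(C, I_obs / M**3, powers=[0, -1, -2, -3, -4])

for name, res in [("I/(M R^2) vs C", res_MR2), ("I/M^3 vs 1/C", res_M3)]:
    print(f"{name:>16s}: mean residual {res.mean():.3%}, max residual {res.max():.3%}")
```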
since the moment of inertia is influenced mostly by the mass in the outer regions of the star , the increases in mass and radius lead quite generically to larger moments of inertia .since the universal relations were first discussed in the limit of slow rotation and small tidal deformations , have investigated the impact of rapid uniform rotation on these relations and , in particular , on the between the moment of inertia for sequences with constant angular velocity .what was found in this case is that the universality was lost , with deviations from the slow - rotation limit of up to 40% with increasing rotation rate .interestingly , however , also found that the universality is restored ( within limits ) if the dimensionless moment of inertia is ordered not along sequences of constant , but along sequences of constant dimensionless spin parameter .in particular , along each sequence the relation is independent of the eos up to about 1% , but it is of course a function of the spin parameter . hence , it is natural to investigate whether the relation we have discussed in the previous section in the case of slowly rotating stars , continues to be valid also in the case of rapid rotation .this is summarised in fig .[ fig : lsrot ] , whose two panels show respectively and as a function of , with the shaded band marking the uncertainty band coming from the fit within the slow - rotation approximation , the data reported in fig . [fig : lsrot ] refers only to the first eoss in fig .[ fig : ls ] , i.e. , eoss : apr4 , sly4 , ls220 , wff1 , wff2 , l , n , o , hs , hs , hs , hs , sfho , sfhx . ] .following , and expecting it will yield a tighter correlation , we order the data along sequences of constant .note that while the data for ( red symbols ) lies almost on top of the results for the slow - rotation limit , the sequences for ( light - green ) show larger deviations , and for ( blue symbols ) the data lies completely outside of the error band for the slow - rotation fit .indeed , along the sequence , which is also near the mass - shedding limit for most eoss , the deviation in the fitted range can be as large as . despite the complete loss of universality at high rotation rates and small compactnesses , and the fact that universality is present only along specific directions ( i.e. , sequences ) , the relation remains effective for a large portion of the space of parameters .we recall , in fact , that the fastest known rotating pulsar has a spin frequency of 716 hz ( or a period of , * ? ? ?* ) , which could correspond to depending on the pulsar s mass . for the eoss used here , however , for this pulsar and would therefore lie within the error band of the fit for the slow - rotation limit for most of that range .for completeness , and following what was done for the slow - rotation approximation , we report in table [ table : fit_coeffs ] the numerical values of the fitting coefficients in eq . for the various sequences considered .already had pointed out that the binding energy of a compact star , defined as the difference between the total rest mass and the gravitational mass of an equilibrium configuration , could show a behaviour that is essentially independent of the eos . 
in particular , considering the dimensionless binding energy , found a good fit to the data with an expression of the type where and .we have revisited the ansatz using more modern eoss satisfying the two solar - mass constraint and found that the fit to the data yields values for the coefficients and in particular that and .the reduced chi squared for the two different sets of coefficients is different , being for the original fit of and of for our new fit . yet , because expression is effectively a second - order polynomial for small values of , we have also considered a different functional form for the fitting ansatz and which , in the spirit of expression , is a quadratic polynomial in the stellar compactness , i.e. , where and .the fit in this case is marginally better than that obtained with expression , with a reduced chi square of .all things considered , and on the basis of the eoss used here , expression is only a marginally better description of the functional behaviour of the reduced binding energy and should be considered effectively equivalent to expression .interestingly , expression can also be easily extended to encompass the case of rotating stars after replacing the coefficients and with new expressions that contain rotation - induced corrections in terms of the dimensionless angular momentum , i.e. , and in , where where , , , and .note that the linear coefficients are very small , as are the errors in the estimates of the quadratic coefficients .the reduced chi square obtained when comparing the values of the reduced binding energy obtained via with those obtained from the rns is rather small and given by .finally , we note that expressing the binding energy in terms of the radius rather than of the mass changes the functional behaviour of the fitting function since , but not the overall quality of the fit .while the origin of the universal relations is still unclear and the subject of an intense research ( and debate ) , we take here the more pragmatic view in which universal relations are seen as an interesting behaviour of compact stars in a special area of the space of solutions : namely , those of slowly rotating , low - magnetisation stars . in this view, universal relations can be used to constrain phenomenologically quantities which are not directly accessible by observations or whose behaviour is degenerate .for instance , the simultaneous measurement of the mass and of the moment of inertia of a compact star , e.g. , of a pulsar in a binary system of compact stars , does not necessarily provide information on the radius . on the other hand , the same measurements , together with the use of the universal relation ( [ eq : br_fit ] ) can set constraints on the radius and hence on the eos . to illustrate how this can be done in practicelet us consider the case in which the moment of inertia is measured with an observational error of 10% ; this may well be an optimistic expectation but is not totally unrealistic [ a similar assumption was made also by ] .if the mass is observed with a much larger precision [ as it is natural to expect from pulsar measurements in binary neutron - star systems ] , then it is possible to define regions in the plane that are compatible with such observations . in turn , the comparison with the expectations from different eoss will set constraints on the radius of the star . 
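The constraint procedure described here can be sketched as follows, assuming a placeholder universal relation of the form I/M^3 = a1/C + a2/C^2 (the coefficients below are illustrative, not the fitted ones): the relation is inverted by bisection to obtain the compactness, hence the radius, from a measured mass and a moment of inertia known to about 10%, and the observational and fit errors are combined in quadrature.

```python
import numpy as np

# Placeholder universal relation  I / M^3 = a1/C + a2/C**2  (NOT the fitted coefficients).
A1, A2 = 0.8, 0.2

def i_over_m3(C):
    return A1 / C + A2 / C**2

def compactness_from(I, M, lo=0.05, hi=0.40, tol=1e-12):
    """Invert the monotonically decreasing relation I/M^3 = f(C) by bisection."""
    target = I / M**3
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if i_over_m3(mid) > target:
            lo = mid          # f too large -> a larger compactness is needed
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

def radius_estimate(M, I, rel_err_I=0.10, rel_err_fit=0.05):
    """Radius and its relative uncertainty from (M, I); errors added in quadrature.

    The observational error on I is mapped to a radius error by re-inverting the
    relation at I*(1 +/- rel_err_I); the fit error enters as a flat relative term.
    """
    C0 = compactness_from(I, M)
    R0 = M / C0
    R_at_I_lo = M / compactness_from(I * (1.0 - rel_err_I), M)
    R_at_I_hi = M / compactness_from(I * (1.0 + rel_err_I), M)
    dR_obs = 0.5 * abs(R_at_I_hi - R_at_I_lo) / R0
    return R0, np.sqrt(dR_obs**2 + rel_err_fit**2)

# Example in geometrized units with everything in km: M ~ 1.4 M_sun ~ 2.07 km, I in km^3.
R, dR = radius_estimate(M=2.07, I=70.0)
print(f"R = {R:.2f} km  +/- {100*dR:.1f}%")
```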
in practice , given a measurements of and , it is sufficient to invert the fitting function ( [ eq : br_fit ] ) to obtain the range of radii that is compatible with the measurements of and . examples of these compatibility regions in the plane are represented by the coloured shaded regions in the left panel of fig .[ fig : radiusconstr ] , where we have considered respectively ( red - shaded region ) , ( green - shaded region ) , and ( blue - shaded region ) ) . ] . given then a measure of the mass (we have considered a canonical mass of and indicated it with an horizontal black dashed line ) , the intersection of the corresponding constraint with a given shaded region sets a range for the possible value of the radius .for instance , a measurement of and would require radii in excess of , thus excluding relatively soft eoss such as apr4 .similar considerations can be made also for rapidly rotating stars .this is shown as is shown in the right - hand panel of fig .[ fig : radiusconstr ] , where we consider again three different hypothetical measurements of the moment of inertia , but also consider the extension of the universal relation along sequences of constant dimensionless spin parameter . we indicate with a black solid line the fit relative to the slowly rotating models and with red , green and blue dashed lines the fits corresponding to , and , respectively . note that even the largest considered value for , the sequences are within the error bands imposed by the observational error on and by the uncertainties on the fitting function .following the considerations above , we next compute the error made in estimating the radius of the star for any eos once a measurement is made of and . to this scopewe need to account both for the error due to the observational uncertainty of the moment of inertia and for the error on the fitting function . because the error on the fit ( [ eq : br_fit ] ) is of the order of 5% for a large part of the considered range in the compactness, the error on the radius made when inverting the fitting function ( [ eq : br_fit ] ) can be estimated via the standard gaussian error - propagation law .more specifically , we assume that the total error in the radius estimate is given by the sum in quadrature of two errors , i.e. , where is the error in the fit when computed as the -norm of the relative deviation of the fit from the data [ cf . , eq . ] , while is the error in the observed moment of inertia .the error on the fit is approximately given by where is the maximum relative error .of course , all what was discussed so far can be applied equally to the new fitting relation as to the original fit by given by expression . )( ls fit , dashed lines ) , or from the fit ( [ eq : br_fit ] ) ( br fit , solid lines ) .different colours refer to different values of the moment of inertia ( red , green and blue lines for and , respectively ) .the error estimates includes an observational uncertainty of the moment of inertia of 10% and the error on the fitting function . ] figure [ fig : err ] summarises the results of this error analysis by reporting the relative error in the measurement of the stellar radius for any eos once the mass of the star is measured to high precision and the moment of inertia is estimated with a relative error of .more specifically , eqs . 
andare inverted numerically to estimate from and , while is computed as the difference between the radius obtained from the inverted fitting function and the inverted relation with the errors on the moment of inertia and the fitting function taken into account .figure [ fig : err ] shows this relative error as a function of the measured mass and also in this case we report with lines of different colours the three different hypothetical measurements of the moment of inertia .note that we indicate with solid lines the errors deduced using the new fit ( br fit ) and with dashed lines the corresponding fit coming from expression of ( ls fit ) .the two prescriptions provide rather different behaviours in the error estimates as a function of mass .in particular , for small masses , i.e. , , the modelling in terms of and yield comparable errors . however , for large masses , i.e. , , the modelling in terms of yields considerably smaller errors .we have shown that a universal relation is exhibited also by equilibrium solutions that are not stable .in particular , we have considered uniformly rotating configurations on the turning - point line , that is , whose mass is an extremum along a sequence of constant angular momentum .such stellar configurations are unstable since they are found at larger central rest - mass densities than those that on the neutral - point line and therefore marginally stable .while hints of this relation were already discussed in the literature , we have here shown that it holds not only for the maximum value of the angular momentum , but also for any rotation rate .the importance of this universal relation is that it has allowed us to compute the maximum mass sustainable through rapid uniform rotation , finding that for any eos it is about 20% larger than the maximum mass supported by the corresponding nonrotating configuration .finally , using universal relations for some of the properties of compact stars , we have revisited the possibility of constraining the radius of a compact star from the combined measurement of its mass and moment of inertia , as is expected to be possible in a binary system containing a pulsar . in particular , after considering both stellar models in the slow - rotation approximation and in very rapid rotation , we have shown that the dimensionless moment of inertia for slowly rotating compact stars correlates tightly and universally with the stellar compactness in a manner that does not depend on the eos .we have also derived an analytical expression for such a correlation , which improves on a previous expression obtained with the dimensionless moment of inertia . assuming that a measurement of the moment of inertia is made with a realistic precision of 10% and that a much more accurate measurement is made of the mass, we have found that the new relation yields relative errors in the estimate of the stellar radius that are for a large range of masses and moment of inertia .radius measurements of this precision have the potential of setting serious constraints on the eos .interestingly , the universal relation between and is not restricted to slowly rotating models , but can be found also in stellar models that are spinning near the mass - shedding limit . in this case , the universal relation needs to be parameterized in terms of the dimensionless angular momentum , but the functional behaviour is very close to the nonrotating limit also at the most extreme rotation rates .we thank v. ferrari , b. haskell , and j. 
lattimer for useful discussions and comments .we are also grateful to j. c. miller for a careful read of the paper , to s. typel for essential help with the use of compose , and to the referee for useful comments and suggestions .lr thanks the department of physics of the university of oxford , where part of this work was carried out .partial support comes also from `` newcompstar '' , cost action mp1304 , and by the loewe - program in hic for fair .all of the eoss used here can be found at the eos repository `` compstar online supernovae equations of state ( compose ) '' , at the url compose.obspm.fr .since the posting of this paper , a number of related works have been published or posted . have considered rapidly rotating strange stars and the data in their tables suggest that a universal maximum mass is present also for the quark - matter eoss considered ( i.e. , ) , but additional work is needed to confirm this behaviour . have instead considered compact stars in scalar - tensor theories and gravity , finding that relations similar to the ones reported here are valid also in such theories .as discussed in sec . [ sec : applications ] , we reported in fig .[ fig : err ] the relative error of the radius estimate , , as computed numerically for the ls fit ( dashed lines ) and for the br fit ( continuous lines ) . as first pointed out by the referee , an analytical expression for the ls fitcan also be obtained under a number of simplifying assumptions .to obtain such an expression , we first express the moment of inertia as [ cf . so that , where we have indicated with the various degrees of freedom . the error in the measure of be expressed as ^{1/2}\,.\ ] ] assuming , as in , the error in the moment of inertia is ^{1/2}\,.\ ] ] we can evaluate the two terms in and obtain and a bit of algebra leads to so that expression can be written as where using now and , we rewrite as ^{1/2}\,.\ ] ] recognising now that the first term on the right - hand side of is or , equivalently , that can rewrite it as ^{1/2}\,,\ ] ] so that ^{1/2}\,.\ ] ] we can next assume that the fitting error in the is also very small , so that and thus eq .can be simply written as where the function is defined as so that , once a value for the moment of inertia is fixed , expressions and can be written as a function of the mass only .a similar procedure can be carried out for the fitting in eq . .the behaviour of the denominator in the function shows that the error can diverge for a given value of the compactness . for eq ., which is only a first approximation since it neglects the error in the moment of inertia due to the spread in the eoss , this divergence occurs at when using the coefficients in eq . .however , in the fully numerical analysis carried out in fig .[ fig : err ] , this divergence takes place at smaller compactnesses and around .we also note that because depends on the compactness , there will be different relative errors for different choices of the moment of inertia , as shown in fig .[ fig : err ] .
|
a number of recent works have highlighted that it is possible to express the properties of general - relativistic stellar equilibrium configurations in terms of functions that do not depend on the specific equation of state employed to describe matter at nuclear densities . these functions are normally referred to as `` universal relations '' and have been found to apply , within limits , both to static or stationary isolated stars , as well as to fully dynamical and merging binary systems . further extending the idea that universal relations can be valid also away from stability , we show that a universal relation is exhibited also by equilibrium solutions that are not stable . in particular , the mass of rotating configurations on the turning - point line shows a universal behaviour when expressed in terms of the normalized keplerian angular momentum . in turn , this allows us to compute the maximum mass allowed by uniform rotation , , simply in terms of the maximum mass of the nonrotating configuration , , finding that for all the equations of state we have considered . we further introduce an improvement to previouly published universal relations by lattimer & schutz between the dimensionless moment of inertia and the stellar compactness , which could provide an accurate tool to constrain the equation of state of nuclear matter when measurements of the moment of inertia become available . [ firstpage ] gravitational waves binaries : general stars : neutron .
|
scheduling is a fundamental part of production planning and control .the task of scheduling is the allocation of activities over time to limited resources , where a number of conditions must be preserved .resources represent objects , which can be allocated by activities . using ordered sequences of activities ,basic production flows can be specified .these sequences , which are mainly predetermined by technical or organizational requirements , are specified by jobs .jobs can also specify supplementary conditions , as for example deadlines . in manufacturing ,a job usually models an order .before the method presented in this paper is sketched , the fundamental basic approaches are considered which led to its development.at first , reasons for conceptualizing the method as heuristic are given ( section [ sec : heuristicmethod ] ) .after showing that the consideration and processing of vague data , conditions and objectives are necessary ( section [ sec : necessityintegrationvagueness ] ) , it is shown how the integration of such information can by achieved in a common way ( section [ sec : integrationvaguenesswithfuzzy ] ) . using this potential ,it is also possible to integrate varying , partly vague conditions into the scheduling process ( section [ sec : conditions ] ) .based on this possibility of integration , allocation recommendations can be derived ( section [ sec : recommendations ] ) which can be finally transferred into an allocation decision ( section [ sec : allocationandcontinuation ] ) .nowadays , many manufacturing industries are confronted with large variety in jobs and activities whose processing can be very complex to coordinate or schedule . in practice, the determination of optimal schedules normally leads to np - hard problems in the complexity theoretical sense , for which no efficient algorithms are known .nevertheless the theoretically optimal schedule has mostly only a short time of validity .considering the cost - benefit calculation , it is advisable to use a heuristic method generating an approximately optimal schedule in appropriate time .the considered input variables and parameters in scheduling such as times , lengths of times , quantities and restrictions usually possess an inherent vagueness . often , sharpened data is not available or can only be expensively acquired .in addition , dependences between relevant variables are only known approximately .as a rule , there is a continuous transition between permissible and non - permissible conditions .frequently this fact is ignored .instead vague data or conditions are often sharpened artificially .however , artificial sharpening of data or conditions should usually be advised against .an artificial sharpening leads sometimes to a distorted image of the reality . 
in the worst casethis leads even to a complete loss of reality .since it is closer to reality , the consideration of vague information is better than the consideration of artificial sharpened information.as logical consequence , it is necessary to integrate the vagueness of naturally vague information into the scheduling process .naturally vague information conveys in their basic form ( but also in a nearly basic form ) a more exact conceivability of its accuracy than in an artificially sharpened form .it usually can be assumed that the scheduling results will be more realistic when using information with a form as close to its basic form as possible .a computer - aided interpretation and processing is only attainable if the underlying modeling and processing are both well defined and equally suitable for sharp and for vague information .vagueness must be processed precisely .for this reason , both the modeling language and the kind of processing must be from a strictly mathematical nature . as a premise , both high comprehensibility and transparency of decisionmust be ensured .in this context , the fuzzy set theory is particularly suitable . with the fuzzy set theory it is possible to map and precisely process both sharp information and not exact quantifiable information ( and vague information respectively ) in a uniform way .scheduling processes are subjected to varying , partly vague conditions and objectives .it concerns production internal conditions as well as production external conditions .for instance , job- and resource - specific conditions ( production internal conditions ) are regarded as well as politically and strategically characterized conditions ( production external conditions).a fundamental approach of the presented method is to provide a possibility to integrate the varying , partly vague , basic conditions directly into the scheduling process and therefore to minimize manual intervention.for this purpose , it is necessary to interlink the context relevant conditions adequately and to use them as a decision basis whenever decisions must be made in the scheduling process.the fuzzy theory offers the possibility to put that into practice using fuzzy approximate reasoning methods as for instance the fuzzy decision support system of _ rommelfanger _ and _ eickemeier _ . with this method ,it is possible to map and precisely process sharp information and not exact quantifiable information ( and vague information respectively ) such as data , conditions and assessments in a uniform way .human decision - making processes can also be integrated into the scheduling process .the feature of human decision - making processes is to get a good solution even if the decision circumstances are complex or poorly structured .that applies also if the underlying information is incomplete , vague or even contradictory .on account of these possibilities , varying , partly vague , basic conditions and objectives can be uniformly integrated into the scheduling process independently of their degree of vagueness . 
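As a minimal illustration of the kind of representation meant here, a vague quantity such as a processing time of "roughly 40 to 50 minutes, most likely about 45" can be kept as a triangular fuzzy number with an explicit membership function instead of being artificially sharpened to a single value. The class below is a sketch with made-up figures and is not part of the cited decision support system.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TriangularFuzzyNumber:
    """Triangular fuzzy number (a, b, c): membership rises linearly from a to the
    peak b and falls linearly from b to c.  Used to model a vague duration or date."""
    a: float   # smallest value considered possible
    b: float   # most plausible value
    c: float   # largest value considered possible

    def membership(self, x: float) -> float:
        if x <= self.a or x >= self.c:
            return 0.0
        if x <= self.b:
            return (x - self.a) / (self.b - self.a)
        return (self.c - x) / (self.c - self.b)

    def defuzzify(self) -> float:
        """Centre-of-gravity defuzzification of a triangle is the mean of its corners."""
        return (self.a + self.b + self.c) / 3.0

# "The milling step takes roughly 40 to 50 minutes, most likely about 45."
milling_time = TriangularFuzzyNumber(40.0, 45.0, 50.0)
print(milling_time.membership(44.0))   # high plausibility
print(milling_time.membership(49.0))   # low plausibility
print(milling_time.defuzzify())        # crisp surrogate, used only where unavoidable
```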
in this way, it is guaranteed that their substantial influence also appears in the scheduling process .the allocation of a job is performed by allocation of all its activities to resources and thereby , the conditions must be considered.a fundamental idea of the presented method is to keep up a greatest possible degree of flexibility as long as possible to be able to generate an approximately optimal schedule.the approach is to first determine the resource - specific optimal sequence of all activities to be allocated . in doing so , a detailed perception of the preferred allocation sequence of every resourceis gained .equipped with this information , resource - comprehensive recommendations for allocations can be determined .after all , an explicit allocation decision can be derived from these recommendations.while determining the recommendations ( both the resource - specific recommendations and the resource - comprehensive recommendations ) , the conditions and objectives described in section [ sec : conditions ] must be considered . since these conditions and objectives can be handled by fuzzy methods in an adequate manner , it is advisable to also determine the recommendations with fuzzy methods .based on the fundamental approaches previously discussed , the initial idea for the following new method was developed .the purpose of this method is to get a nearly optimal schedule within an appropriate time considering the vagueness in the scheduling process adequately .the method itself is designed iteratively using a rolling allocation decision mechanism ( see figure [ fig : method : overview]).since a specific activity is in the following always clearly assigned to a job , a job is an outer wrapper of its activities specifying activity - comprehensive conditions . starting from the jobs and their activities to be allocated , a rough temporal relative arrangement of activities is generated .the generation of this arrangement is based on a fuzzy version of a retrograde scheduling method . in literature , a retrograde scheduling methodis sometimes also called backward scheduling method.with this method , the course of scheduling occurs contrary to the technological course ; starting from the deadline of a job , the latest possible allocation of its activities is realized . since dates , times , and duration of times are often vague , the retrograde scheduling method is extended to be capable of handling fuzzy representations of these temporal parameters.the generated rough temporal relative arrangement is used as optimized input for the following time window based selection .starting from the generated rough temporal relative arrangement of the activities to be scheduled , the activities , which should be considered in a forward - shifted horizon of fixed size , are taken into account by a time window based selection . 
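The backward pass and the subsequent time window based selection just described can be sketched as follows. Starting from a crisp deadline for simplicity, triangular fuzzy durations are subtracted in reverse technological order (a crisp point d minus a triangle (a, b, c) gives the triangle (d-c, d-b, d-a)), and a defuzzified latest start then decides whether an activity falls into the planning horizon. The job data, the conservative choice of the earliest corner as the successor's finish, and the horizon are all illustrative assumptions.

```python
def minus_fuzzy(deadline, dur):
    """Subtract a triangular fuzzy duration (a, b, c) from a crisp time point:
    the result is the triangle (deadline - c, deadline - b, deadline - a)."""
    a, b, c = dur
    return (deadline - c, deadline - b, deadline - a)

def defuzzify(tri):
    return sum(tri) / 3.0

def backward_schedule(deadline, activities):
    """Retrograde pass: walk the technological sequence from last to first activity,
    assigning each one a fuzzy latest-start interval.  Returns (name, fuzzy_start) pairs."""
    plan, finish = [], (deadline, deadline, deadline)
    for name, duration in reversed(activities):
        start = minus_fuzzy(finish[0], duration)  # conservative: finish bounded by earliest corner
        plan.append((name, start))
        finish = start
    return list(reversed(plan))

def time_window_selection(plan, window_start, window_end):
    """Keep only activities whose defuzzified latest start falls into the planning horizon."""
    return [name for name, start in plan if window_start <= defuzzify(start) <= window_end]

# A job with three activities in technological order; durations are triangular fuzzy numbers (hours).
job = [("cutting", (2.0, 3.0, 4.0)), ("milling", (4.0, 5.0, 7.0)), ("assembly", (1.0, 1.5, 2.0))]
plan = backward_schedule(deadline=100.0, activities=job)
for name, start in plan:
    print(name, start)
print(time_window_selection(plan, window_start=85.0, window_end=95.0))
```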
in this way ,a quantitative restriction of the activities observed by the succeeding steps of the method is performed.this way of proceeding is ostensibly comparable with the load - oriented order release scheduling method , but the concept is different .the load - oriented order release scheduling method is based on the proposition , that a reduction of the average machining time is only possible with a lowering of the average quantity of the prior activity queue ; the presented method uses the time window based selection only to reduce the complexity of the succeeding steps.usually , no complete jobs are represented by the time window based selection of activities .however , manufacturing is primarily job - oriented . for this reason, the list of the selected activities is extended with all unscheduled activities assigned either to the same jobs as the activities picked up by the time window based selection or to only partial allocated jobs of a prior run.in this way , a list of activities is generated which contains all unscheduled activities of jobs which are referenced either by the time window based selection or by a prior run .considering job - specific and comprehensive conditions , all activities of the activity list generated in the previous step are prioritized . amongst other things ,the activities belonging to important jobs are emphasized in contrast to activities of less important jobs .the prioritization is made by a fuzzy rating method basing on the fuzzy decision support system introduced by _rommelfanger _ and _ eickemeier _figure [ fig : rating ] shows an example for a job - specific rating .the priorities assigned to the activities induce a partial order .this partial ordered activity list is used as an initial list for the following activity - orientated resource - specific allocation recommendations .considering the given partial order as much as possible , the activities given by the partial ordered activity list are prioritized resource - specific . in order to provide an evaluation process, the method for the resource - specific allocation recommendation also pays attention to several recently scheduled activities . for this purpose ,overlapping resource - specific time windows are used within the rolling planning process .the activities given by the partial ordered activity list and any recently scheduled activities to consider are aggregated to a new activity list which is then prioritized resource - specific by a fuzzy rating method.if an activity can not be allocated by a specific resource , it is not a member of the corresponding resource - specific activity list to be prioritized ; if an activity can be allocated by several resources it is a member of all corresponding activity lists but usually with different priorities depending on the specific resource .thereby , the resource - specific situation and conditions are considered as well as comprehensive conditions , hard restrictions and objectives.in doing so , a detailed perception of the preferred allocation sequence of every resource is gained . in this way , resource - specific allocation recommendations can be derived .equipped with this information , an optimized resource - comprehensive recommendation for allocations can be determined . 
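The rating step itself can only be hinted at with a strongly simplified stand-in: each activity is scored by a weighted aggregation of membership degrees for a few criteria (job importance, slack-based urgency, resource fit), and the resource-specific recommendation is the descending order of these scores. The actual method referenced in the text, the fuzzy decision support system of Rommelfanger and Eickemeier, is rule-based and considerably richer; the criteria, weights and data below are invented for illustration.

```python
def slack_urgency(slack_hours, critical=4.0, relaxed=48.0):
    """Membership of 'urgent': 1 below a critical slack, 0 above a relaxed slack, linear between."""
    if slack_hours <= critical:
        return 1.0
    if slack_hours >= relaxed:
        return 0.0
    return (relaxed - slack_hours) / (relaxed - critical)

def rate_activity(activity, weights=(0.5, 0.3, 0.2)):
    """Weighted aggregation of membership degrees; a higher score means allocate earlier."""
    w_imp, w_urg, w_fit = weights
    return (w_imp * activity["job_importance"]
            + w_urg * slack_urgency(activity["slack_hours"])
            + w_fit * activity["resource_fit"])

activities = [
    {"name": "A17/drill",  "job_importance": 0.9, "slack_hours": 6.0,  "resource_fit": 0.7},
    {"name": "B03/mill",   "job_importance": 0.4, "slack_hours": 30.0, "resource_fit": 0.9},
    {"name": "A18/finish", "job_importance": 0.9, "slack_hours": 2.0,  "resource_fit": 0.5},
]

# Resource-specific recommendation: order the activity list by descending fuzzy rating.
for act in sorted(activities, key=rate_activity, reverse=True):
    print(f"{act['name']:>10s}  score = {rate_activity(act):.2f}")
```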
from the resource - comprehensive viewpoint the resource - specific allocation recommendations generated in the last step are not necessarily redundancy - free .the corresponding ordered lists may contain activities which are elements of above one of these lists .since every activity must be allocated at most one time , a method for redundancy removal is accomplished .thereby , the consideration of the outer conditions ( the strategic conditions in particular ) is guaranteed by a fuzzy rating method.in order to achieve a preferably balanced utilization simultaneously , a rolling allocation rating process is employed , which use resource - specific time windows .this rating process will be repeated until no changes or no more significant changes will occur . in doing so, resource - comprehensive recommendations for an allocation are derived for every resource .these recommendations are limited by the resource - specific sliding time windows .the resource - comprehensive allocation recommendations determined in the last step are transformed to allocation determinations for the considered activities .subsequently , the allocations are performed.afterwards , all activities that are not allocated are brought into a further scheduling process again .thereby , all allocated activities are no longer considered in the further scheduling process ; if all activities of a job are allocated , this applies also to the corresponding job . in this way, the final schedule containing all activities and jobs is constructed step - by - step .in this paper , a new method was presented , which integrates the vagueness of naturally vague information in specified shape into the scheduling process considering varying , partly vague , basic conditions and conditions.it was shown how it is possible to integrate that important , but usually hardly used , source of information using the fuzzy theory and the techniques developed from it .it was also shown the possibility of the integration of human decision and human assessment processes into the scheduling process.by the conscious use of vague information and the avoidance of over specification , even a complexity reduction can be achieved . for not going beyond the scope of this paper ,the approach of self - organizing activities was not considered .adam , d. ( 1992 ) retrograde terminierung : ein verfahren zur fertigungssteuerung bei diskontinuierlichem materialfluss oder vernetzter fertigung . in : adam d. ( ed ) : fertigungssteuerung grundlagen und systeme .volume * 38/39*. wiesbaden , pp 245262 .borgelt , c. ; kruse , r. ( 2001 ) unsicherheit und vagheit : begriffe , methoden , forschungsthemen . in : ki ,knstliche intelligenz * 3/01*. arendtap , bremen , pp 1824 brucker , p. ( 2001 ) scheduling algorithms .springer , berlin heidelberg new york eiden , w.a .( 2003 ) priorittengesteuertes scheduling auf basis eines multikriteriellen fuzzy - bewertungsverfahrens . technical report .darmstadt university of technology , darmstadt .eiden , w.a .( 2003 ) flexibles scheduling auf basis von fuzzy - technologien . in : geldermann , j. ; rommelfanger , h. ( eds ) : einsatz von fuzzy - sets , neuronalen netzen und knstlicher intelligenz in industrieller produktion und umweltforschung .fortschritt - berichte * 10/725*. vdi , dsseldorf , pp 7084 heitmann , c. ( 2002 ) beurteilung der bestandsfestigkeit von unternehmen mit neuro - fuzzy .phd thesis .peter lang , frankfurt am main nebl , t. ( 2002 ) production management .oldenbourg , mnchen wien rausch , p. 
( 1999 ) hiprofit ein konzept zur untersttzung der hierarchischen produktionsplanung mittels fuzzy - clusteranalysen und unscharfer lp - tools .phd thesis .peter lang , frankfurt am main rommelfanger , h. ; eickemeier , s. ( 2002 ) entscheidungstheorie klassische konzepte und fuzzy - erweiterungen .springer , berlin heidelberg .schwab , j. ( 1999 ) logistisches strungsmanagement .44th international scientific colloquium .technical university of ilmenau , ilmenau sibbel , r. ( 1998 ) fuzzy - logik in der fertigungssteuerung am beispiel der retrograden terminierung . lit verlag dr .wilhelm hopf , mnster
|
nowadays , manufacturing industries driven by fierce competition and rising customer requirements are forced to produce a broader range of individual products of rising quality at the same ( or preferably lower ) cost . meeting these demands implies an even more complex production process and thus also an appropriately increasing request to its scheduling . aggravatingly , vagueness of scheduling parameters such as times and conditions are often inherent in the production process . in addition , the search for an optimal schedule normally leads to very difficult problems ( np - hard problems in the complexity theoretical sense ) , which can not be solved efficiently.with the intent to minimize these problems , the introduced heuristic method combines standard scheduling methods with fuzzy methods to get a nearly optimal schedule within an appropriate time considering vagueness adequately .
|
we consider games played between two players on graphs . at every round of the game , each of the two players selects a move ; the moves of the players then determine the transition to the successor state .a play of the game gives rise to a path on the graph .we consider two basic goals for the players : _ reachability , _ and _ safety ._ in the reachability goal , player 1 must reach a set of target states or , if randomization is needed to play the game , then player 1 must maximize the probability of reaching the target set . in the safety goal ,player 1 must ensure that a set of target states is never left or , if randomization is required , then player 1 must ensure that the probability of leaving the target set is as low as possible .the two goals are dual , and the games are determined : the maximal probability with which player 1 can reach a target set is equal to one minus the maximal probability with which player 2 can confine the game in the complement set .these games on graphs can be divided into two classes : _ turn - based _ and _ concurrent . _ in turn - based games , only one player has a choice of moves at each state ; in concurrent games , at each state both players choose a move , simultaneously and independently , from a set of available moves . for turn - based games , the solution of games with reachability and safety goals has long been known .if the move played determines uniquely the successor state , the games can be solved in linear - time in the size of the game graph .if the move played determines a probability distribution over the successor state , the problem of deciding whether a safety of reachability can be won with probability greater than ] can be decided in pspace via a reduction to the theory of the real closed field .this yields a binary - search algorithm to approximate the value .this approach is theoretical , but complex due to the complex decision algorithms for the theory of reals . thus far , the only practical approach to the solution of concurrent safety and reachability games has been via value iteration , and via strategy improvement for reachability games . in was shown how to construct a series of valuations that approximates from below , and converges , to the value of a reachability game ; the same algorithm provides valuations converging from above to the value of a safety game . in , it was shown how to construct a series of strategies for reachability games that converge towards optimality . neither scheme is guaranteed to terminate , not even strategy improvement , since in general only -optimal strategies are guaranteed to exist .both of these approximation schemes lead to practical algorithms . the problem with both schemes ,however , is that they provide only _ lower _ bounds for the value of reachability games , and only _ upper _ bounds for the value of safety games . as no bounds are available for the speed of convergence of these algorithms , the question of how to derive the matching boundshas so far been open . in this paper , we present the first strategy improvement algorithm for the solution of concurrent safety games . 
given a safety goal for player 1 , the algorithm computes a sequence of memoryless , randomized strategies for player 1 that converge towards optimality .albeit memoryless randomized optimal strategies exist for safety goals , the strategy improvement algorithm may not converge in finitely many iterations : indeed , optimal strategies may require moves to be played with irrational probabilities , while the strategies produced by the algorithm play moves with probabilities that are rational numbers .the main significance of the algorithm is that it provides a converging sequence of _ lower _ bounds for the value of a safety game , and dually , of _ upper _ bounds for the value of a reachability game . to obtain such bounds, it suffices to compute the value provided by at a state , for .once is fixed , the game is reduced to a markov decision process , and the value of the safety game can be computed at all e.g. via linear programming .thus , together with the value or strategy improvement algorithms of , the algorithm presented in this paper provides the first practical way of computing converging lower and upper bounds for the values of concurrent reachability and safety games .we also present a detailed analysis of termination criteria for turn - based stochastic games , and obtain an improved upper bound for termination for turn - based stochastic games . the strategy improvement algorithm for reachability games of based on locally improving the strategy on the basis of the valuation it yields .this approach does not suffice for safety games : the sequence of strategies obtained would yield increasing values to player 1 , but these value would not necessarily converge to the value of the game . in this paper , we introduce a novel , and non - local , improvement step , which augments the standard value - based improvement step .the non - local step involves the analysis of an appropriately - constructed turn - based game . as value iteration for safety games converges from above , while our sequences of strategies yields values that converge from below , the proof of convergence for our algorithm can not be derived from a connection with value iteration , as was the case for reachability games .thus , we developed new proof techniques to show both the monotonicity of the strategy values produced by our algorithm , and to show convergence to the value of the game .[ [ notation . ] ] notation .+ + + + + + + + + for a countable set , a _ probability distribution _ on is a function |\trans|=\sum_{s\in s_r , t\in s } bits required to specify the transition probability .we denote by the size of the game graph , and .[ [ plays . ] ] plays .+ + + + + + a _ play _ of is an infinite sequence of states in such that for all , there are moves and with .we denote by the set of all plays , and by the set of all plays such that , that is , the set of plays starting from state .[ [ selectors - and - strategies . ] ] selectors and strategies .+ + + + + + + + + + + + + + + + + + + + + + + + + a _ selector _ for player is a function such that for all states and moves , if , then .a selector for player at a state is a distribution over moves such that if , then . we denote by the set of all selectors for player , and similarly , we denote by the set of all selectors for player at a state .the selector is _ pure _ if for every state , there is a move such that . 
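A small data-structure sketch of the objects just defined may help: a concurrent game graph with per-state move sets and a probabilistic transition function, together with randomized selectors represented as distributions over moves. The one-step successor distribution under a pair of selectors is obtained by averaging the transition function; the names and the toy game below are ours and not notation from the paper.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

State, Move = str, str
Dist = Dict[State, float]          # probability distribution over successor states

@dataclass
class ConcurrentGame:
    moves1: Dict[State, Tuple[Move, ...]]     # moves available to player 1 at each state
    moves2: Dict[State, Tuple[Move, ...]]     # moves available to player 2 at each state
    delta: Dict[Tuple[State, Move, Move], Dist] = field(default_factory=dict)

    def step(self, s: State, sel1: Dict[Move, float], sel2: Dict[Move, float]) -> Dist:
        """Distribution over successors of s when the players mix according to sel1 and sel2."""
        out: Dist = {}
        for a, pa in sel1.items():
            for b, pb in sel2.items():
                for t, pt in self.delta[(s, a, b)].items():
                    out[t] = out.get(t, 0.0) + pa * pb * pt
        return out

# Two-state toy game: at s0 both players pick 'l' or 'r'; matching moves stay safe, otherwise risk 'bad'.
g = ConcurrentGame(
    moves1={"s0": ("l", "r"), "bad": ("x",)},
    moves2={"s0": ("l", "r"), "bad": ("x",)},
    delta={
        ("s0", "l", "l"): {"s0": 1.0}, ("s0", "r", "r"): {"s0": 1.0},
        ("s0", "l", "r"): {"bad": 1.0}, ("s0", "r", "l"): {"s0": 0.5, "bad": 0.5},
        ("bad", "x", "x"): {"bad": 1.0},
    },
)

uniform = {"l": 0.5, "r": 0.5}
print(g.step("s0", uniform, uniform))   # one-step successor distribution under uniform selectors
```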
a _strategy _ for player is a function that associates with every finite , nonempty sequence of states , representing the history of the play so far , a selector for player ; that is , for all and , we have . the strategy is _ pure _ if it always chooses a pure selector ; that is , for all , there is a move such that . a _strategy is independent of the history of the play and depends only on the current state .memoryless strategies correspond to selectors ; we write for the memoryless strategy consisting in playing forever the selector .a strategy is _ pure memoryless _ if it is both pure and memoryless . in a turn - based stochastic game ,a strategy for player 1 is a function , such that for all and for all we have .memoryless strategies and pure memoryless strategies are obtained as the restriction of strategies as in the case of concurrent game graphs .the family of strategies for player 2 are defined analogously .we denote by and the sets of all strategies for player and player , respectively .we denote by and the sets of memoryless strategies and pure memoryless strategies for player , respectively .[ [ destinations - of - moves - and - selectors . ] ] destinations of moves and selectors .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + for all states and moves and , we indicate by the set of possible successors of when the moves and are chosen . given a state , and selectors and for the two players , we denote by the set of possible successors of with respect to the selectors and .once a starting state and strategies and for the two players are fixed , the game is reduced to an ordinary stochastic process .hence , the probabilities of events are uniquely defined , where an _ event _ is a measurable set of plays . for an event ,we denote by the probability that a play belongs to when the game starts from and the players follows the strategies and .similarly , for a measurable function , we denote by the expected value of when the game starts from and the players follow the strategies and . for , we denote by the random variable denoting the -th state along a play . [ [ valuations . ] ] valuations .+ + + + + + + + + + + a _ valuation _ is a mapping ] with each state . given two valuations ,we write when for all states . for an event , we denote by the valuation ] , we denote by the valuation $ ] defined for all by .[ [ reachability - and - safety - objectives . ] ] reachability and safety objectives .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + given a set of _ safe _ states , the objective of a safety game consists in never leaving .therefore , we define the set of winning plays as the set . given a subset of _ target _ states , the objective of a reachability game consists in reaching .correspondingly , the set winning plays is of plays that visit .for all and , the sets and is measurable .an objective in general is a measurable set , and in this paper we would consider only reachability and safety objectives . for an objective , the probability of satisfying from a state under strategies and for players 1 and 2 , respectively , is .we define the _ value _ for player 1 of game with objective from the state as i.e. 
, the value is the maximal probability with which player 1 can guarantee the satisfaction of against all player 2 strategies .given a player-1 strategy , we use the notation a strategy for player 1 is _ optimal _ for an objective if for all states , we have for , a strategy for player 1 is _ -optimal _ if for all states , we have the notion of values and optimal strategies for player 2 are defined analogously .reachability and safety objectives are dual , i.e. , we have . the quantitative determinacy result of ensures that for all states , we have [ thrm : memory - determinacy ] for all concurrent game graphs , for all , such that , the following assertions hold . 1 . memoryless optimal strategies exist for safety objectives . for all , memoryless -optimal strategies exist for reachability objectives .3 . if is a turn - based stochastic game graph , then pure memoryless optimal strategies exist for reachability objectives and safety objectives .to develop our arguments , we need some facts about one - player versions of concurrent stochastic games , known as _ markov decision processes _ ( mdps ) . for , a _player- mdp _( for short , -mdp ) is a concurrent game where , for all states , we have . given a concurrent game , if we fix a memoryless strategy corresponding to selector for player 1 , the game is equivalent to a 2-mdp with the transition function for all and .similarly , if we fix selectors and for both players in a concurrent game , we obtain a markov chain , which we denote by . [[ end - components . ] ] end components .+ + + + + + + + + + + + + + + in an mdp , the sets of states that play an equivalent role to the closed recurrent classes of markov chains are called `` end components '' . an _ end component _ of an -mdp , for ,is a subset of the states such that there is a selector for player so that is a closed recurrent class of the markov chain .it is not difficult to see that an equivalent characterization of an end component is the following . for each state , there is a subset of moves such that : 1 . _( closed ) _ if a move in is chosen by player at state , then all successor states that are obtained with nonzero probability lie in ; and 2 . _ ( recurrent ) _ the graph , where consists of the transitions that occur with nonzero probability when moves in are chosen by player , is strongly connected . given a play , we denote by the set of states that occurs infinitely often along . given a set of subsets of states , we denote by the event .the following theorem states that in a 2-mdp , for every strategy of player 2 , the set of states that are visited infinitely often is , with probability 1 , an end component .corollary [ coro : prob1 ] follows easily from theorem [ theo - ec ] . [ theo - ec ] for a player-1 selector , let be the set of end components of a 2-mdp . for all player-2 strategies and all states , we have .[ coro : prob1 ] for a player-1 selector , let be the set of end components of a 2-mdp , and let be the set of states of all end components . for all player-2 strategies and all states , we have . 
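The two defining conditions of an end component (closedness and recurrence) translate directly into a simple check: keep at every state of the candidate set only the moves whose entire successor support stays inside the set, and then test strong connectivity of the induced transition graph. The sketch below does this for a 1-player MDP given in an ad-hoc dictionary representation; both the representation and the example are assumptions of ours.

```python
def is_end_component(U, moves, delta):
    """U: set of states; moves[s]: iterable of moves at s; delta[(s, a)]: dict successor -> prob.
    Returns True iff U, with some nonempty selection of moves, is closed and strongly connected."""
    U = set(U)
    # 1) Closedness: keep at each state only the moves whose entire support lies inside U.
    kept = {s: [a for a in moves[s] if set(delta[(s, a)]) <= U] for s in U}
    if any(not ms for ms in kept.values()):
        return False
    # 2) Recurrence: the graph of kept transitions restricted to U must be strongly connected.
    edges = {s: {t for a in kept[s] for t in delta[(s, a)]} for s in U}

    def reachable(start, succ):
        seen, stack = {start}, [start]
        while stack:
            for t in succ[stack.pop()]:
                if t not in seen:
                    seen.add(t)
                    stack.append(t)
        return seen

    root = next(iter(U))
    preds = {s: {t for t in U if s in edges[t]} for s in U}
    return reachable(root, edges) == U and reachable(root, preds) == U

# Tiny 1-player MDP: from u the player may stay in {u, v} (move 'a') or escape to w (move 'b').
moves = {"u": ["a", "b"], "v": ["a"]}
delta = {("u", "a"): {"v": 1.0}, ("u", "b"): {"w": 1.0}, ("v", "a"): {"u": 0.5, "v": 0.5}}
print(is_end_component({"u", "v"}, moves, delta))   # True: move 'a' keeps the play inside {u, v}
print(is_end_component({"u"}, moves, delta))        # False: no move at u keeps the play inside {u}
```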
[[ subsec : mdpreach ] ] mdps with reachability objectives .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + given a 2-mdp with a reachability objective for player 2 , where , the values can be obtained as the solution of a linear program .the linear program has a variable for all states , and the objective function and the constraints are as follows : the correctness of the above linear program to compute the values follows from .in this section we present a strategy improvement algorithm for concurrent games with safety objectives . the algorithm will produce a sequence of selectors for player 1 , such that : 1 .[ l - improve-1 ] for all , we have ; 2 .[ l - improve-3 ] if there is such that , then ; and 3 .[ l - improve-2 ] .condition [ l - improve-1 ] guarantees that the algorithm computes a sequence of monotonically improving selectors .condition [ l - improve-3 ] guarantees that if a selector can not be improved , then it is optimal .condition [ l - improve-2 ] guarantees that the value guaranteed by the selectors converges to the value of the game , or equivalently , that for all , there is a number of iterations such that the memoryless player-1 strategy is -optimal .note that for concurrent safety games , there may be no such that , that is , the algorithm may fail to generate an optimal selector .this is because there are concurrent safety games such that the values are irrational .we start with a few notations * the operator and optimal selectors . * given a valuation , and two selectors and , we define the valuations , , and as follows , for all states : intuitively , is the greatest expectation of that player 1 can guarantee at a successor state of . also note that given a valuation , the computation of reduces to the solution of a zero - sum one - shot matrix game , and can be solved by linear programming .similarly , is the greatest expectation of that player 1 can guarantee at a successor state of by playing the selector .note that all of these operators on valuations are monotonic : for two valuations , if , then for all selectors and , we have , , and . given a valuation and a state , we define by the set of optimal selectors for at state .for an optimal selector , we define the set of counter - optimal actions as follows : observe that for , for all we have .we define the set of optimal selector support and the counter - optimal action set as follows : i.e. , it consists of pairs of actions of player 1 and player 2 , such that there is an optimal selector with support , and is the set of counter - optimal actions to .* turn - based reduction . * given a concurrent game and a valuation we construct a turn - based stochastic game as follows : 1 .the set of states is as follows : 2 .the state space partition is as follows : ; ; and 3 .the set of edges is as follows : 4 .the transition function for all states in is uniform over its successors .intuitively , the reduction is as follows . given the valuation , state is a player 1 state where player 1 can select a pair ( and move to state ) with and such that there is an optimal selector with support exactly and the set of counter - optimal actions to is the set . from a player 2 state ,player 2 can choose any action from the set , and move to state .a state is a probabilistic state where all the states in are chosen uniformly at random .given a set we denote by .we refer to the above reduction as , i.e. , .* value - class of a valuation . 
* given a valuation and a real , the _ value - class _ of value is the set of states with valuation , i.e. , [ [ ordering - of - strategies . ] ] ordering of strategies .+ + + + + + + + + + + + + + + + + + + + + + + let be a concurrent game and be the set of safe states .let .given a concurrent game graph with a safety objective , the set of _ almost - sure winning _ states is the set of states such that the value at is , i.e. , is the set of almost - sure winning states .an optimal strategy from is referred as an almost - sure winning strategy .the set and an almost - sure winning strategy can be computed in linear time by the algorithm given in .we assume without loss of generality that all states in are absorbing .we define a preorder on the strategies for player 1 as follows : given two player 1 strategies and , let if the following two conditions hold : ( i ) ; and ( ii ) for some state . furthermore , we write if either or .we first present an example that shows the improvements based only on operators are not sufficient for safety games , even on turn - based games and then present our algorithm .[ examp : conc - safety ] consider the turn - based stochastic game shown in fig [ fig : example - tbs ] , where the states are player 1 states , the states are player 2 states , and states are random states with probabilities labeled on edges .the safety goal is to avoid the state .consider a memoryless strategy for player 1 that chooses the successor , and the counter - strategy for player 2 chooses .given the strategies and , the value at and is , and since all successors of have value , the value can not be improved by . however , note that if player 2 is restricted to choose only value optimal selectors for the value , then player 1 can switch to the strategy and ensure that the game stays in the value class with probability 1 . hence switching to would force player 2 to select a counter - strategy that switches to the strategy , and thus player 1 can get a value . [[ informal - description - of - algorithmalgorithmstrategy - improve - safe . ] ] informal description of algorithm [ algorithm : strategy - improve - safe ] .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + we now present the strategy improvement algorithm ( algorithm [ algorithm : strategy - improve - safe ] ) for computing the values for all states in .the algorithm iteratively improves player-1 strategies according to the preorder .the algorithm starts with the random selector that plays at all states all actions uniformly at random . at iteration , the algorithm considers the memoryless player-1 strategy and computes the value .observe that since is a memoryless strategy , the computation of involves solving the 2-mdp .the valuation is named . for all states such that , the memoryless strategy at is modified to a selector that is value - optimal for .the algorithm then proceeds to the next iteration .if , then the algorithm constructs the game , and computes as the set of almost - sure winning states in for the objective .let .if is non - empty , then a selector is obtained at from an pure memoryless optimal strategy ( i.e. , an almost - sure winning strategy ) in , and the algorithm proceeds to iteration . 
if and is empty , then the algorithm stops and returns the memoryless strategy for player 1 .unlike strategy improvement algorithms for turn - based games ( see for a survey ) , algorithm [ algorithm : strategy - improve - safe ] is not guaranteed to terminate , because the value of a safety game may not be rational .aaa = aaa = aaa = aaa = aaa = aaa = aaa = aaa + a concurrent game structure with safe set .+ a strategy for player 1 .compute .let and .compute .do \ { * + 3.1 .let .+ 3.2 * if * , * then * + 3.2.1 let be a player-1 selector such that for all states , + we have .+ 3.2.2 the player-1 selector is defined as follows : for each state , let + + 3.3 * else * + 3.3.1 let + 3.3.2 let be the set of almost - sure winning states in for and + be a pure memoryless almost - sure winning strategy from the set .+ 3.3.3 * if * ( ) + 3.3.3.1 let + 3.3.3.2 the player-1 selector is defined as follows : for , let + + 3.4 .compute .let .+ and .return * .[ lemm : stra - improve - safe1 ] let and be the player-1 selectors obtained at iterations and of algorithm [ algorithm : strategy - improve - safe ] .let .let and .then for all states ; and therefore for all states , and for all states .consider the valuations and obtained at iterations and , respectively , and let be the valuation defined by for all states .the counter - optimal strategy for player 2 to minimize is obtained by maximizing the probability to reach .let in other words , , and we also have .we now show that is a feasible solution to the linear program for mdps with the objective , as described in section [ sec : mdp ] . since , it follows that for all states and all moves , we have for all states , we have and , and since , it follows that for all states and all moves , we have since for the selector is obtained as an optimal selector for , it follows that for all states and all moves , we have in other words , .hence for all states and all moves , we have since , for all states and all moves , we have hence it follows that is a feasible solution to the linear program for mdps with reachability objectives .since the reachability valuation for player 2 for is the least solution ( observe that the objective function of the linear program is a minimizing function ) , it follows that .thus we obtain for all states , and for all states .recall that by example [ examp : conc - safety ] it follows that improvement by only step 3.2 is not sufficient to guarantee convergence to optimal values .we now present a lemma about the turn - based reduction , and then show that step 3.3 also leads to an improvement .finally , in theorem [ thrm : safe - termination ] we show that if improvements by step 3.2 and step 3.3 are not possible , then the optimal value and an optimal strategy is obtained .[ lemm : stra - improve - safetb ] let be a concurrent game with a set of safe states .let be a valuation and consider .let be the set of almost - sure winning states in for the objective , and let be a pure memoryless almost - sure winning strategy from in .consider a memoryless strategy in for states in as follows : if , then such that and .consider a pure memoryless strategy for player 2 . if for all states , we have , then for all , we have .we analyze the markov chain arising after the player fixes the memoryless strategies and .given the strategy consider the strategy as follows : if and , then at state choose the successor . 
since is an almost - sure winning strategy for , it follows that in the markov chain obtained by fixing and in , all closed connected recurrent set of states that intersect with are contained in , and from all states of the closed connected recurrent set of states within are reached with probability 1 .it follows that in the markov chain obtained from fixing and in all closed connected recurrent set of states that intersect with are contained in , and from all states of the closed connected recurrent set of states within are reached with probability 1 .the desired result follows .[ lemm : stra - improve - safe2 ] let and be the player-1 selectors obtained at iterations and of algorithm [ algorithm : strategy - improve - safe ] .let , and .let and .then for all states , and for some state .we first show that .let .let for all states .since , it follows that for all states and all moves , we have the selector chosen for at satisfies that .it follows that for all states and all moves , we have it follows that the maximal probability with which player 2 can reach against the strategy is at most .it follows that .we now argue that for some state we have .given the strategy , consider a pure memoryless counter - optimal strategy for player 2 to reach . since the selectors at states are obtained from the almost - sure strategy in the turn - based game to satisfy , it follows from lemma [ lemm : stra - improve - safetb ] that if for every state , the action , then from all states , the game stays safe in with probability 1 .since is a given strategy for player 1 , and is counter - optimal against , this would imply that .this would contradict that and .it follows that for some state we have , and since we have in other words , we have define a valuation as follows : for , and . hence , and given the strategy and the counter - optimal strategy , the valuation satisfies the inequalities of the linear - program for reachability to .it follows that the probability to reach given is at most . since , it follows that for all , and .this concludes the proof .we obtain the following theorem from lemma [ lemm : stra - improve - safe1 ] and lemma [ lemm : stra - improve - safe2 ] that shows that the sequences of values we obtain is monotonically non - decreasing .[ thrm : safe - mono ] for , let and be the player-1 selectors obtained at iterations and of algorithm [ algorithm : strategy - improve - safe ] .if , then .[ thrm : safe - termination ] let be the valuation at iteration of algorithm [ algorithm : strategy - improve - safe ] such that .if , and , then is an optimal strategy and .we show that for all memoryless strategies for player 1 we have . since memoryless optimal strategies exist for concurrent games with safety objectives ( theorem [ thrm : memory - determinacy ] ) the desired result follows .let be a pure memoryless optimal strategy for player 2 in for the objective complementary to , where .consider a memoryless strategy for player 1 , and we define a pure memoryless strategy for player 2 as follows . 1 . if , then , such that ; ( such a exists since ) .if , then let , and consider such that . then we have , such that . observe that by construction of , for all , we have . we first show that in the markov chain obtained by fixing and in , there is no closed connected recurrent set of states such that .assume towards contradiction that is a closed connected recurrent set of states in .the following case analysis achieves the contradiction . 
1 .suppose for every state we have .then consider the strategy in such that for a state we have , where , and .since is closed connected recurrent states , it follows by construction that for all states in the game we have , where .it follows that for all in we have .since is an optimal strategy , it follows that .this contradicts that .2 . otherwise for some state we have .let , i.e. , is the least value - class with non - empty intersection with .hence it follows that for all , we have . observe that since for all we have , it follows that for all either ( a ) ; or ( b ) , for some .since is the least value - class with non - empty intersection with , it follows that for all we have .it follows that .consider the state such that . by the construction of , we have .hence we must have , for some .thus we have a contradiction .it follows from above that there is no closed connected recurrent set of states in , and hence with probability 1 the game reaches from all states in . hence the probability to satisfy is equal to the probability to reach . since for all states we have , it follows that given the strategies and , the valuation satisfies all the inequalities for linear program to reach .it follows that the probability to reach from is atmost .it follows that for all we have . the result follows .* convergence .* we first observe that since pure memoryless optimal strategies exist for turn - based stochastic games with safety objectives ( theorem [ thrm : memory - determinacy ] ) , for turn - based stochastic games it suffices to iterate over pure memoryless selectors .since the number of pure memoryless strategies is finite , it follows for turn - based stochastic games algorithm [ algorithm : strategy - improve - safe ] always terminates and yields an optimal strategy . for concurrent games, we will use the result that for , there is a _ -uniform memoryless _ strategy that achieves the value of a safety objective with in .we first define -uniform memoryless strategies .a selector for player 1 is _if for all and all there exists such that and , i.e. , the moves in the support are played with probability that are multiples of with .[ lemm : kuniform ] for all concurrent game graphs , for all safety objectives , for , for all , there exist -uniform selectors such that is an -optimal strategy for , where . _( sketch ) ._ for a rational , using the results of , it can be shown that whether can be expressed in the quantifier free fragment of the theory of reals . then using the formula in the theory of reals and theorem 13.12 of , it can be shown that if there is a memoryless strategy that achieves value at least , then there is a -uniform memoryless strategy that achieves value at least , where , for . * strategy improvement with -uniform selectors .* we first argue that if we restrict algorithm [ algorithm : strategy - improve - safe ] such that every iteration yields a -uniform selector , then the algorithm terminates .if we restrict to -uniform selectors , then a concurrent game graph can be converted to a turn - based stochastic game graph , where player 1 first chooses a -uniform selector , then player 2 chooses an action , and then the transition is determined by the chosen -uniform selector of player 1 , the action of player 2 and the transition function of the game graph . then by termination of turn - based stochastic games it follows that the algorithm will terminate . 
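to make the restriction to k - uniform selectors concrete , the sketch below ( an illustration , not code from the paper ) enumerates all k - uniform distributions over a finite move set , i.e. , probability vectors whose entries are nonnegative multiples of 1/k summing to 1 . there are only finitely many such selectors per state , which is what makes the turn - based reduction described above a finite game ; the function name and the use of exact fractions are choices made for this example .

```python
# Sketch (illustrative): enumerate all k-uniform selectors over a finite move
# set, i.e., distributions (c_1/k, ..., c_m/k) with nonnegative integers c_i
# summing to k, via a standard stars-and-bars enumeration.

from fractions import Fraction
from itertools import combinations

def k_uniform_selectors(moves, k):
    m = len(moves)
    for bars in combinations(range(k + m - 1), m - 1):
        counts, prev = [], -1
        for b in bars:
            counts.append(b - prev - 1)
            prev = b
        counts.append(k + m - 1 - prev - 1)
        yield {a: Fraction(c, k) for a, c in zip(moves, counts)}

# example: k = 2 over the moves ('a', 'b') yields the three selectors
# (0, 1), (1/2, 1/2) and (1, 0); in general there are C(k+m-1, m-1) of them.
```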
given ,let us denote by the valuation of algorithm [ algorithm : strategy - improve - safe ] at iteration , where the selectors are restricted to be -uniform , and is the valuation of algorithm [ algorithm : strategy - improve - safe ] at iteration .since is obtained without any restriction , it follows that for all , for all , we have . from lemma [ lemm : kuniform ]it follows that for all , there exists a and such that for all we have .this gives us the following result .let be the valuation obtained at iteration of algorithm [ algorithm : strategy - improve - safe ] .then the following assertions hold . 1. for all , there exists such that for all we have .2 . .* complexity .* algorithm [ algorithm : strategy - improve - safe ] may not terminate in general .we briefly describe the complexity of every iteration . given a valuation , the computation of involves solution of matrix games with rewards and can be computed in polynomial time using linear - programming . given and , the set and can be computed by enumerating the subsets of available actions at and then using linear - programming : for example to check it suffices to check that there is an selector such that is optimal ( i.e. for all actions we have ) ; for all we have , and for all we have ; and to check is the set of counter - optimal actions we check that for we have ; and for we have .all the above can be solved by checking feasibility of a set of linear inequalities .hence can be computed in time polynomial in size of and and exponential in the number of moves .the set of almost - sure winning states in turn - based stochastic games with safety objectives can be computed in linear - time .in this section we present termination criteria for strategy improvement algorithms for concurrent games for -approximation , and then present an improved termination condition for turn - based games . * termination for concurrent games . * a strategy improvement algorithm for reachability games was presented in .we refer to the algorithm of as the _ reachability strategy improvement algorithm_. the reachability strategy improvement algorithm is simpler than algorithm [ algorithm : strategy - improve - safe ] : it is similar to algorithm [ algorithm : strategy - improve - safe ] and in every iteration only step 3.2 is executed ( and step 3.3 need not be executed ) . applying the reachability strategy improvement algorithm of for player 2 , for a reachability objective , we obtain a sequence of valuations such that ( a ) ; ( b ) if , then ; and ( c ) . given a concurrent game with and , we apply the reachability strategy improvement algorithm to obtain the sequence of valuation as above , and we apply algorithm [ algorithm : strategy - improve - safe ] to obtain a sequence of valuation .the termination criteria are as follows : 1 .if for some we have , then we have , and , and we obtain the values of the game ; 2 . if for some we have , then we have , and , and we obtain the values of the game ; and 3 . for , if for some , we have , then for all we have and ( i.e. 
, the algorithm can stop for -approximation ) .observe that since and are both monotonically non - decreasing and , it follows that if , then forall we have and .this establishes that and ; and the correctness of the stopping criteria ( 3 ) for -approximation follows .we also note that instead of applying the reachability strategy improvement algorithm , a value - iteration algorithm can be applied for reachability games to obtain a sequence of valuation with properties similar to and the above termination criteria can be applied .let be a concurrent game graph with a safety objective .algorithm [ algorithm : strategy - improve - safe ] and the reachability strategy improvement algorithm for player 2 for the reachability objective yield sequence of valuations and , respectively , such that ( a ) for all , we have ; and ( b ) . * termination for turn - based games . * for turn - based stochastic gamesalgorithm [ algorithm : strategy - improve - safe ] and as well as the reachability strategy improvement algorithm terminates .each iteration of the reachability strategy improvement algorithm of is computable in polynomial time , and here we present a termination guarantee for the reachability strategy improvement algorithm . to apply the reachability strategy improvement algorithm we assume the objective of player 1 to be a reachability objective , and the correctness of the algorithm relies on the notion of _ proper strategies_. let .then the notion of proper strategies and its properties are as follows .a player-1 strategy is _ proper _ if for all player-2 strategies , and for all states , we have . a player-1 selector is _ proper _ if the memoryless player-1 strategy is proper .let be a turn - based stochastic game with reachability objective for player 1 .let be the initial selector , and be the selector obtained at iteration of the reachability strategy improvement algorithm .if is a pure , proper selector , then the following assertions hold : 1 . for all , we have is a pure , proper selector ; 2 . for all , we have , where and ; and 3 . if , then , and there exists such that .the strategy improvement algorithm of condon works only for _ halting games _ , but the reachability strategy improvement algorithm works if we start with a pure , proper selector for reachability games that are not halting . hence to use the reachability strategy improvement algorithm to compute values we need to start with a pure , proper selector .we present a procedure to compute a pure , proper selector , and then present termination bounds ( i.e. , bounds on such that ) . the construction of pure , proper selector is based on the notion of _ attractors _ defined below. _ attractor strategy ._ let , and for we have since for all we have , it follows that from all states in player 1 can ensure that is reached with positive probability .it follows that for some we have .the pure _ attractor _selector is as follows : for a state we have , where ( such a exists by construction ) .the pure memoryless strategy ensures that for all , from the game reaches with positive probability . 
hence there is no end - component contained in in the mdp .it follows that is a pure selector that is proper , and the selector can be computed in time .this completes the reachability strategy improvement algorithm for turn - based stochastic games .we now present the termination bounds ._ termination bounds ._ we present termination bounds for binary turn - based stochastic games .a turn - based stochastic game is binary if for all we have , and for all if , then for all we have , i.e. , for all probabilistic states there are at most two successors and the transition function is uniform .the results follow as a special case of lemma 2 of .lemma 2 of holds for halting turn - based stochastic games , and since markov chains reaches the set of closed connected recurrent states with probability 1 from all states the result follows . since pure memoryless optimal strategies exist for both players ( theorem [ thrm : memory - determinacy ] ) , we fix pure memoryless optimal strategies and for both players .the markov chain can be then reduced to an equivalent markov chains with states ( since we fix deterministic successors for states in , they can be collapsed to their successors ) .the result then follows from lemma [ lemm : mc - bound ] . from lemma [ lemm : tb - bound ]it follows that at iteration of the reachability strategy improvement algorithm either the sum of the values either increases by or else there is a valuation such that .since the sum of values of all states can be at most , it follows that algorithm terminates in at most steps .moreover , since the number of pure memoryless strategies is at most , the algorithm terminates in at most steps .it follows from the results of that a turn - based stochastic game graph can be reduced to a equivalent binary turn - based stochastic game graph such that the set of player 1 and player 2 states in and are the same and the number of probabilistic states in is , where is the size of the transition function in .thus we obtain the following result .let be a turn - based stochastic game with a reachability objective , then the reachability strategy improvement algorithm computes the values in time where is polynomial function .the results of presented an algorithm for turn - based stochastic games that works in time .the algorithm of works only for turn - based stochastic games , for general turn - based stochastic games the complexity of the algorithm of is better .however , for turn - based stochastic games where the transition function at all states can expressed in constant bits we have . in these cases the reachability strategy improvement algorithm ( that works for both concurrent and turn - based stochastic games ) works in time as compared to the time of the algorithm of .a. bianco and l. de alfaro .model checking of probabilistic and nondeterministic systems . in _fsttcs 95 : software technology and theoretical computer science _, volume 1026 of _ lecture notes in computer science _ , pages 499513 .springer - verlag , 1995 .a. condon . on algorithms for simple stochastic games . in _ advances in computational complexity, volume 13 of _ dimacs series in discrete mathematics and theoretical computer science _ , pages 5173 .american mathematical society , 1993 .
|
we consider concurrent games played on graphs . at every round of the game , each player simultaneously and independently selects a move ; the moves jointly determine the transition to a successor state . two basic objectives are the safety objective : `` stay forever in a set of states '' , and its dual , the reachability objective , `` reach a set of states '' . we present in this paper a strategy improvement algorithm for computing the _ value _ of a concurrent safety game , that is , the maximal probability with which player 1 can enforce the safety objective . the algorithm yields a sequence of player-1 strategies which ensure probabilities of winning that converge monotonically to the value of the safety game . the significance of the result is twofold . first , while strategy improvement algorithms were known for markov decision processes and turn - based games , as well as for concurrent reachability games , this is the first strategy improvement algorithm for concurrent safety games . second , and most importantly , the improvement algorithm provides a way to approximate the value of a concurrent safety game _ from below _ ( the known value - iteration algorithms approximate the value from above ) . thus , when used together with value - iteration algorithms , or with strategy improvement algorithms for reachability games , our algorithm leads to the first practical algorithm for computing converging upper and lower bounds for the value of reachability and safety games .
|
the privacy amplification is a technique to distill a secret key from a source that is partially known to an eavesdropper , usually referred to as eve .the privacy amplification is regarded as an indispensable tool in the information theoretic security , and it has been studied in many literatures ( eg . ) .recently , the non - asymptotic analysis of coding problems has attracted considerable attention . especially for the channel coding problem , the relation between various types of non - asymptotic boundsare extensively compared in .the performance of the privacy amplification is typically characterized by the smooth minimum entropy or the inf - spectral entropy .there is also another approach , the exponential bound , which has been investigated by one of the authors .so far , the relation between non - asymptotic bounds derived by these two approaches has not been clarified .the first purpose of this paper is to compare the min - entropy bound , which is derived by the smooth minimum entropy framework , and the exponential bound . actually , it turns out that the exponential bound is better than the min - entropy bound when a security parameter is rather small for a block length . in the following ,we explain a reason for this result . in the achievability part of the smooth entropy framework or the information spectrum approach , a performance criterion of a problem , such as an error probability or a security parameter , is usually upper bounded by a formula consisting of two terms .one of the terms is caused by the smoothing error , which corresponds to a tail probability of atypical outcomes .the other is caused by typical outcomes . in the following ,let us call the former one the type 1 error term and the latter one the type 2 error term respectively . to derive a tight bound in total ,we need to tightly bound both the type 1 and type 2 error terms .this fact has been recognized in literatures .indeed , one of the authors derived the state - of - the - art error exponent for the classical - quantum channel coding by tightly bounding both types of error terms .in , polyanskiy _ et .al . _ derived a non - asymptotic bound of the channel coding , which is called the dt bound , by tightly bounding both types of error terms .the dt bound remarkably improves on the so - called feinstein bound because the type 2 error term is loosely bounded in the feinstein bound .the improvement is especially remarkable when a required error probability is rather small for a block length . for the privacy amplification problem , one of the authors derived the state - of - the - art exponent of the variational distance by tightly bounding both types of error terms . on the other hand ,the type 2 error term is loosely bounded in the bound derived via the smooth minimum entropy . as is expected from the above argument, the exponential bound turns out to be better than the min - entropy bound when a security parameter is rather small for a block length . for rather large security parameters ,the min - entropy bound is better than the exponential bound .this is because we derive the exponential bound by using the large deviation technique .the large deviation technique is only tight when a threshold of a tail probability is away from the average , and this is not the case when a security parameter is rather large for a block length . 
as the second purpose of this paper , we derive a bound that interpolates the exponential bound and the min - entropy bound .this is done by a hybrid use of the rnyi entropy and the inf - spectral entropy .it turns out that the hybrid bound is better than both the exponential bound and the min - entropy bound for whole ranges of security parameters .the rest of the paper is organized as follows . in section [ section : preliminaries ] , we summarize known bounds on the privacy amplification . in section [ section : hybrid ] ,we propose a novel bound by using the rnyi entropy and the inf - spectral entropy . in section [ section : numerical - calcuation ] , we compare the bounds numerically .in this section , we review the problem setting and known results on the privacy amplification . although most of results in this section were stated explicitly or implicitly in literatures , we restate them for reader s convenience .especially , theorem [ theorem : one - shot ] , theorem [ theorem : spectrum ] , and theorem [ theorem : gaussian - approximation ] are classical analogue of those obtained in for the quantum setting , where the distance to evaluate the smoothing is different . for a set ,let be the set of all probability distribution on .it is also convenient to introduce the set of all sub - normalized non - negative functions .let be a sub - normalized non - negative function .for a function and the key , let we define the security by where is the uniform distribution on and for .although the quantity has no operational meaning for unnormalized , it will be used to derive bounds on for normalized . for distribution and security parameter , we are interested in characterizing in this section , we review the smooth minimum entropy framework that was mainly introduced and developed by renner and his collaborators . for and a normalized , we define then , we define where we also define where the following is a key lemma to derive every lower bound on .[ lemma : left - over ] let be the uniform random variable on a set of universal 2 hash family .then , for and , we have must be such that . ] \le \frac{1}{2 } \sqrt{|{\cal s}| e^{- h_2(p_{xz}|r_z)}},\end{aligned}\ ] ] where is the conditional rnyi entropy of order relative to .since , we have the following . for and , we have \le \frac{1}{2 } \sqrt{|{\cal s}| e^{- h_{\min}(p_{xz}|r_z)}}.\end{aligned}\ ] ] furthermore ,since holds for by the triangular inequality , we have the following . [ corollary : smooth - entropy - bound ] for and , we have \le 2 \varepsilon + \frac{1}{2 } \sqrt{|{\cal s}| e^{- \bar{h}_{\min}^\varepsilon(p_{xz}|r_z)}}.\end{aligned}\ ] ] the following is a key lemma to derive a upper bound on .[ lemma : monotonicity ] for any function , , and , we have see appendix [ appendix : lemma : monotonicity ] . when eve s side - information is the quantum density operator instead of the random variable , the monotonicity of the smooth minimum entropy was proved in ( * ?* proposition 3 ) , where the smoothing is evaluated by the so - called purified distance instead of the trace distance . for the quantum setting and the trace distance, it is not clear whether the monotonicity holds or not because we can not apply uhlmann s theorem to the trace distance directly . from corollary [ corollary :smooth - entropy - bound ] and lemma [ lemma : monotonicity ] , we get the following lower and upper bounds on . 
[theorem : one - shot ] for any , we have in this section , we introduce the inf - spectral entropy .the quantity is used to calculate the lower and upper bounds in theorem [ theorem : one - shot ] . for and ,let be the conditional inf - spectral entropy relative to .the following two lemmas relate the quantities and .[ lemma : spectrum - direct - bound ] for and , we have see appendix [ proof : lemma : spectrum - direct - bound ] .[ lemma : spectrum - converse - bound ] for , we have for any . see appendix [ proof : lemma : spectrum - converse - bound ] . from theorem [ theorem : one - shot ] , lemma [ lemma : spectrum - direct - bound ] and lemma [ lemma : spectrum - converse - bound ], we have the following . [ theorem : spectrum ] for any and , we have in this section , we consider the asymptotic setting . by applying the berry - essen theorem to theorem [ theorem : spectrum ] , we have the following gaussian approximation of . [ theorem : gaussian - approximation ] let be the dispersion of the conditional log likelihood .then , we have where is the cumulative distribution function of the standard gaussian random variable . in this section ,we review the exponential bounds . for ,let we have the following .[ theorem : large - deviation - bound ] for any , we have \le \frac{3}{2 } |{\cal s}|^\rho e^{\phi(\rho|p_{xz})}.\end{aligned}\ ] ] for , , and , let be the conditional rnyi entropy of order relative to . for , we define by using jensen s inequality and by setting , we have thus , we have the following slightly looser bound . [corollary : large - deviation - bound ] for , we have \le \frac{3}{2 } |{\cal s}|^{\frac{\theta}{1+\theta } } e^{- \frac{\theta}{1+\theta } h_{1+\theta}(p_{xz}|p_z)}.\end{aligned}\ ] ] from theorem [ theorem : large - deviation - bound ] and corollary [ corollary : large - deviation - bound ] , we have the following . [theorem : key - length - large - deviation - bound ] we have this section , we derive another bound from the leftover hash lemma ( lemma [ lemma : left - over ] ) .a basic idea is to use the smoothing in a similar manner as in the derivation of theorem [ theorem : large - deviation - bound ] .however , we do not use the large deviation bound .[ theorem : hybrid - bound - reference - system ] for any , we have + \log 4 \eta^2 - 1 .\label{eq : hybrid - bound}\end{aligned}\ ] ] note that the bound in theorem [ theorem : hybrid - bound - reference - system ] interpolates the lower bound in theorem [ theorem : spectrum ] and the bound in eq .( [ eq : large - deviation - bound-2 ] ) of theorem [ theorem : key - length - large - deviation - bound ] .more specifically , when the supremum in eq .( [ eq : hybrid - bound ] ) is achieved by , then the bound in eq .( [ eq : hybrid - bound ] ) reduces to the bound in theorem [ theorem : spectrum ] . to derive the bound in eq .( [ eq : large - deviation - bound-2 ] ) , we need some large deviation calculation . by using markov s inequality, we have thus , we have by setting , , and by substituting eq .( [ eq : lower - bound - on - spectrum ] ) into eq .( [ eq : hybrid - bound ] ) , we have the bound in eq .( [ eq : large - deviation - bound-2 ] ) . in , the optimal choice of was shown to be to derive the bound in eq .( [ eq : large - deviation - bound-1 ] ) , we need to use more complicated bounding . to derive a non - asymptoticbound that subsume the bound in eq .( [ eq : large - deviation - bound-1 ] ) , we need to introduce more artificial quantities instead of and .in this section , we consider the i.i.d . 
setting .we consider the case such that is obtained from throughout bsc , i.e. , in this case , since is the uniform distribution on , from eq .( [ eq : optimal - r ] ) , the optimal choice of is .we have where is the cumulative density function of the binomial trial .thus , the lower and upper bounds in theorem [ theorem : spectrum ] can be described as where for the distribution of the form in eq .( [ eq : bsc-2 ] ) , the bound in eqs .( [ eq : large - deviation - bound-1 ] ) and ( [ eq : large - deviation - bound-2 ] ) coincide .we have thus , the bounds in theorem [ theorem : key - length - large - deviation - bound ] can be described as where similarly , the bound in theorem [ theorem : hybrid - bound - reference - system ] can be described as where \nonumber \\ & & + \log 4 \eta^2 - 1 .\label{eq : h - lower}\end{aligned}\ ] ] for and , we plot , , , , and gaussian approximation derived by theorem [ theorem : gaussian - approximation ] in fig .[ fig : fixed - epsilon ] , where we set . from the figure, we can find that the exponential bound is better than the min - entropy bound up to about .the hybrid bound is better than both the exponential bound and the min - entropy bound .the gaussian approximation overestimate the lower bounds , but it is sandwiched by the lower bounds and the upper bound . and .the blue curve is the min - entropy bound .the black curve is the exponential bound .the red curve is the hybrid bound .the dashed pink curve is the gaussian approximation .the green curve is the upper bound . ] in figs .[ fig : fixed - n-1000 ] , [ fig : fixed - n-10000 ] and [ fig : fixed - n-100000 ] , the bounds are compared from a different perspective , i.e. , for fixed and varying . from the figures , we can find that the exponential bound and the hybrid bound become much better than the min - entropy bound as becomes small .when is rather large for , the min - entropy bound is better than the exponential bound .the hybrid bound is better than both the exponential bound and the min - entropy bound for whole ranges of . andthe blue curve is the min - entropy bound .the black curve is the exponential bound .the red curve is the hybrid bound .the green curve is the upper bound . ] and .the blue curve is the min - entropy bound .the black curve is the exponential bound .the red curve is the hybrid bound .the green curve is the upper bound . ] andthe blue curve is the min - entropy bound .the black curve is the exponential bound .the red curve is the hybrid bound .the green curve is the upper bound . ]in this paper , we have compared the exponential bound and the min - entropy bound .it turned out that the exponential bound is better than the min - entropy bound when is rather small for .when is rather large for , the min - entropy bound is better than the exponential bound .we also presented the hybrid bound that interpolates the exponential bound and the min - entropy bound .let be such that then , we define then , we have thus , we have . furthermore , by the construction of , we have for every .thus , we have let and be such that . for arbitrary fixed , let and then , we have .furthermore , we have thus , we have which implies r. renner , `` security of quantum key distribution , '' ph.d .dissertation , dipl .eth , switzerland , february 2005 , arxiv : quant - ph/0512258 , also available from international journal of quantum information , vol . 6 , no . 1 , pp . 1127 , february 2008 .r. renner and s. 
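as a complement to the plots , the following sketch ( illustrative , not the code used for the figures ) evaluates the key length permitted by the exponential bound of corollary [ corollary : large - deviation - bound ] for this bsc example . the closed form used for the conditional rényi entropy of the bsc source , and the restriction of the optimization to theta in ( 0 , 1 ] , are assumptions stated in the comments rather than formulas quoted from the text .

```python
# Sketch (illustrative): key length allowed by the exponential bound
#   d <= (3/2) * |S|^(theta/(1+theta)) * exp(-(theta/(1+theta)) * H_{1+theta}(P_XZ|P_Z))
# for the i.i.d. BSC example, where X^n is uniform on {0,1}^n and Z^n is X^n
# observed through a binary symmetric channel with crossover probability p.
# Assumption: for this source H_{1+theta}(P_XZ|P_Z) = -(n/theta) * ln(p^(1+theta) + (1-p)^(1+theta))
# (in nats), the standard BSC computation; theta is optimized over (0, 1].

import math

def renyi_cond_entropy_bsc(n, p, theta):
    return -(n / theta) * math.log(p ** (1 + theta) + (1 - p) ** (1 + theta))

def exponential_bound_key_length(n, p, eps, grid=1000):
    # require the right-hand side of the bound to be at most eps, solve for
    # ln|S|, and maximize over the grid of theta values
    best = 0.0
    for i in range(1, grid + 1):
        theta = i / grid
        h = renyi_cond_entropy_bsc(n, p, theta)
        log_s = h - (1 + theta) / theta * math.log(3 / (2 * eps))
        best = max(best, log_s)
    return best / math.log(2)   # convert from nats to bits

# example call: exponential_bound_key_length(10000, 0.1, 1e-10)
```

for small security parameters the optimizing theta moves away from zero , which is exactly the regime where , as observed above , the exponential bound improves on the min - entropy bound .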
wolf , `` simple and tight bound for information reconciliation and privacy amplification , '' in _ advances in cryptology asiacrypt 2005 _ , ser .lecture notes in computer science , vol .3788.1em plus 0.5em minus 0.4emspringer - verlag , 2005 , pp . 199216 .m. hayashi , `` error exponent in asymmetric quantum hypothesis testing and its application to classical - quantum channel coding , '' _ phys .a _ , vol .76 , no . 6 , p. 062301, december 2007 , arxiv : quant - ph/0611013 .
|
this paper investigates the privacy amplification problem , and compares two existing bounds : the exponential bound derived by one of the authors and the min - entropy bound derived by renner . it turns out that the exponential bound is better than the min - entropy bound when the security parameter is rather small for a given block length , and that the min - entropy bound is better than the exponential bound when the security parameter is rather large for a given block length . furthermore , we present another bound that interpolates between the exponential bound and the min - entropy bound by a hybrid use of the rényi entropy and the inf - spectral entropy .
|
let \times \cdots\times [ 0,l_d] ] we call a family of functions with , _ shift orthogonal basis functions ( sobfs ) _ if they form a complete orthonormal set of basis for space ) ] and the length of the shift in each coordinate is unit length ( i.e. and are replaced by and , respectively ) .because sobfs in 1d form an orthonormal set of basis , tensor product can be used to form orthonormal set of basis for 2d .that is , any function can be expressed by in what follows and and their primes ( i.e. , , etc . )take value from .indices , and their primes take value from .indices and and their primes take value from . also addition and subtraction for indices , and their primes are performed in module .similarly , addition and subtraction for indices , and their primes are performed in module .note that also , suppose because sobfs in 2d form an orthonormal set : we use a specific multi - index notation to refer to the entries of an infinite dimensional vector . to be rigorous, there is a ( non - unique ) one - to - one and onto mapping indeed , by , we mean ; however , to avoid cumbersome notation , we just use to refer to the positive integer instead . as it will be seen shortly , and related to depth index and shift index for the first coordinate , respectively . similarly , and are related to the depth index and shift index for the second coordinate , respectively .we also use multi - index notation to refer to the entries of a vector of length . again , to be rigorous , there is a ( non - unique ) one - to - one and onto mapping indeed , by , we mean ; however , to avoid cumbersome notation , we just use to refer to the elements of . in view of , function can be represented in sobfs basis using infinite dimensional vector also for any infinite dimensional vector , define for any infinite dimensional vector and all pairs , define transformation in the following way : note that from , in view of , for and : define set of shift orthogonal as follow : observe that is only a set and not a subspace .moreover , equation implies that is shift orthogonal if and only if ( i.e. see theorem [ theorem:3equivalence2d ] ) .define -transform to be operator defined by the intuition for -transform is that for every fixed and , if and are thought as matrices then where is the 2d discrete inverse fourier transform .the inverse of -transform , , is similar to the above , -transform can be written as where is the 2d discrete fourier transform .the importance of -transform appears in the following two theorems : for given function , the followings are equivalent : 1 . is shift orthogonal .2 . .3 . for all , [ theorem:3equivalence2d ] for given functions and ,the followings are equivalent : 1 . for all and , 2 . for all and , 3. for all and , [ theorem : shiftperpendicular2d ] in order to prove the above two theorems , we need to introduce some more notations and concepts .we use multi - index notation to define matrices in the following way : identify entries of matrix by , where the index to the left of `` '' determines the row number of the entry and the index to the right of `` '' determines the column number of the entry .let be the matrix defined by let to be infinite dimensional square matrix defined by finally , for any infinite dimensional vector , let be the matrix defined by note that and are unitary matrix ( i.e. although the latter is infinite dimensional and by unitary we mean that is infinite dimensional diagonal matrix whose diagonal entries are one ) . 
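before turning to the proofs , it may help to see the role of the transform numerically . the sketch below ( an interpretation written for this illustration , not code from the paper ) truncates the coefficients to finitely many depth and shift indices , stores them in an array of shape ( j1 , l1 , j2 , l2 ) , applies the transform as a 2d inverse dft over the two shift axes , and tests a unit - norm condition per frequency pair ; the array layout , the orthonormal fft normalization , and this reading of condition 3 are assumptions , chosen to be consistent with the projection procedure described later .

```python
# Sketch (an assumed finite-dimensional reading, not code from the paper):
# test shift orthogonality of a truncated coefficient array c of shape
# (J1, L1, J2, L2), where axes 0 and 2 carry the depth indices and axes 1 and 3
# the shift indices.  The M-transform is taken to be a 2D inverse DFT over the
# shift axes for each fixed pair of depth indices; the "ortho" normalization is
# an assumption about conventions.

import numpy as np

def m_transform(c):
    return np.fft.ifft2(c, axes=(1, 3), norm="ortho")

def is_shift_orthogonal(c, tol=1e-10):
    mc = m_transform(c)
    # condition 3 (as read here): for every frequency pair, the vector of
    # entries over all depth pairs has unit Euclidean norm
    norms = np.sqrt(np.sum(np.abs(mc) ** 2, axis=(0, 2)))
    return bool(np.allclose(norms, 1.0, atol=tol))
```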
to see this ,observe that for example , another important property that and have is that if is multiplied on left by and on right by , then to see the above , note that where was used for the last equality .equality is very significant : it shows how can be turned into a `` pseudo - diagonal '' matrix using unitary matrices and .equation is used extensively , in the remainder of this section . finally , for any two infinite dimensional vectors and : for observe that now we are ready to prove theorem [ theorem:3equivalence2d ] and [ theorem : shiftperpendicular2d ] .* proof of theorem [ theorem:3equivalence2d ] : * the equivalence of 1 and 2 follows easily from ( i.e. with ) and the definition of .it remains to prove the equivalence between 2 and 3 : set .equation ( i.e. with ) implies that if and only if for all , , and ( i.e. matrix is the identity matrix ) .next let compute in two ways : on one hand , because is unitary on the other hand , yields that now if is the identity matrix ( i.e. ) , then from and being unitary , one concludes that is the identity matrix .hence , by , it must be the case that for all .conversely , if the above holds , then by , is the identity matrix .therefore , because is a unitary matrix , would be the identity matrix as well .equation , yields that hence , is the identity matrix , which implies .* proof of theorem [ theorem : shiftperpendicular2d ] : * the equivalence of 1 and 2 follows easily from and the fact that is equivalent to it remains to show that is equivalent to 3 .let and .equation , yields that is equivalent to for all , , and ( i.e. matrix is the zero matrix ) .next let compute in two ways : on one hand , because is unitary on the other hand , yields that now if is the zero matrix , then implies that is the zero matrix . hence , by , it must be the case that for all .conversely , if the above holds , then by , is the zero matrix .therefore , would also be the zero matrix .equation yields that hence , is the zero matrix , which implies . for any function ,let denote the projection of into the set of shift orthogonal functions ; that is , indeed using sobfs basis , where for any , observe that in the above two definitions the minimum arguments are not necessarily unique , and and are sets .define operator by where is a fixed infinite dimensional real vector with unit norm . for any , [ lemma : projection2d ] suppose and set note that , where equalities similar to were used for the second last inequality . now minimizing amounts to solving subproblems : for every and , solve the constraints in the above subproblems are due to and equivalence of 2 and 3 in theorem [ theorem:3equivalence2d ] .the solutions to the above subproblems are exactly that is , projection of into an infinite dimensional ball of radius 1 . in particular , set ( i.e. choose a fixed real valued vector in the second case above ) , in which case , would be an element of . [remark : realness ] theorem 2.2 in paper and definition of operator , yields that if infinite dimensional vector is real valued , then is also real valued .this is why in the case , we assign to operator the fixed infinite dimensional vector ; which is real valued and has unit norm . 
indeed , ( as it is apparent in the proof of lemma [ lemma : projection2d ] )had we defined operator to output all infinite dimensional ( complex ) vector of unit norm in the case , then would have been a set equal to ; however , some elements of would have been complex valued .for computational purposes , only a finite number of sobfs basis are used to represent a function . in this section , assume that for the first coordinate all sobfs basis whose depth index is smaller or equal to and for the second coordinate all sobfs basis whose depth index is smaller or equal to are used to denote functions in .that is , the analysis done in section [ section : theorysobf2d ] can be adapted for this situation by simple modification . in particular , index ( and ) takes value from instead of and index ( and ) takes value from instead of .the adapted definition for set with finite depth indices is as noted earlier , an important question that arises in optimization problems that involve shift orthogonality constraints is to find for a given function ; that is , find shift orthogonal function that minimizes .when functions are expressed in terms of tensor product of one dimensional sobfs basis ( i.e. with corresponding depth indices smaller or equal to and ) , then the question is equivalent to : given , solve result of lemma [ lemma : projection2d ] in section [ section : theorysobf2d ] implies that the solution to problem can be obtained using the procedure in algorithm [ algorithm : proceduren2d ] .all the results that were developed in section [ section : theorysobf2d ] can also be easily adapted for domains with other dimensions .for example suppose \times [ 0,l_2]\times [ 0,l_3] ] ( i.e. again using appropriate scaling , it is assumed that the length of the shift is 1 ) and is the following : this sections describes the computational complexity of algorithm [ algorithm : proceduren2d ] and highlights some of the important properties of this algorithm .the results stay the same for domains with dimensions other than .let be the size of input vector ; which indicates the number of coefficients used to represent the given function .algorithm [ algorithm : proceduren2d ] consists of three `` for '' loops .each iteration in the first and the last for " loop can be computed using operations via inverse fast fourier transform and fast fourier transform , respectively .each iteration in the second loop " can be done using .therefore , algorithm [ algorithm : proceduren2d ] can be performed using operations , which leads to computational complexity of furthermore , note that each of the for " loops in algorithms [ algorithm : proceduren2d ] can be done in parallel .this enhances the speed of the algorithm even further and makes it suitable for inputs with large dimensions .another nice property of algorithm [ algorithm : proceduren2d ] is that for real valued input vector , it outputs a real valued vector ( i.e. recall remark [ remark : realness ] ) .the importance of this property is that if function and sobfs are real valued then would be a real valued vector , and therefore using algorithm [ algorithm : proceduren2d ] , the projected would also be real valued .this section provides an example of real valued sobfs with certain nice properties called shift orthogonal plane waves ( sopws ) . 
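in any dimension the structure of the procedure is the same : transform , normalize per frequency , transform back . as a concrete two - dimensional illustration of the procedure referred to above as algorithm [ algorithm : proceduren2d ] , the sketch below ( under the same array - layout and fft - normalization assumptions as the earlier sketch ) projects a truncated coefficient array onto the set of shift orthogonal functions , replacing the vector of depth coefficients by a fixed real unit vector whenever it vanishes , as in the definition of the operator above .

```python
# Sketch of the 2D projection procedure (algorithm [algorithm:proceduren2d]),
# under the same assumptions as the earlier sketch: c has shape (J1, L1, J2, L2),
# the M-transform is a 2D inverse DFT over the shift axes, and the projection
# acts frequency by frequency on the vector of depth coefficients.

import numpy as np

def project_shift_orthogonal(c, tol=1e-14):
    mc = np.fft.ifft2(c, axes=(1, 3), norm="ortho")
    J1, L1, J2, L2 = mc.shape
    for m1 in range(L1):
        for m2 in range(L2):
            v = mc[:, m1, :, m2]              # depth coefficients at (m1, m2)
            nv = np.linalg.norm(v)
            if nv > tol:
                mc[:, m1, :, m2] = v / nv     # project onto the unit sphere
            else:
                e1 = np.zeros_like(v)         # fixed real unit vector
                e1.flat[0] = 1.0
                mc[:, m1, :, m2] = e1
    return np.fft.fft2(mc, axes=(1, 3), norm="ortho")
```

the two loops over frequency pairs are independent and parallelize trivially , the cost is dominated by the two ffts , and for real - valued input the result is real up to roundoff , in line with the remark on realness above and with the complexity discussion in the next section .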
as it will be seenshortly , sopws are suitable for numerical computation because there exist an exact prescription of them in terms of fourier basis .therefore , functions can be expanded in terms of sopws very efficiently using fft and its inverse .consider 1d domain k k ] , in 2d with periodic boundary conditions ; the algorithms below can be straightforwardly extended to other dimensions .we use sopws given by equations and as the sobfs used in section [ section : projectionalgorithm ] .in particular , we expand the given function in terms of sopws : operators and are defined in the similar way to definitions and : and other examples of sobfs could also be used . for this application ,sopws were chosen mainly due to the efficiency in calculating the result of operators and from fourier coefficient ( i.e. recall from section [ section : definitionsopw ] that fft and its inverse provide an efficient procedure to switch between representation of a function in fourier basis s and its representation in sopws s ) . by introducing an auxiliary variable ,the constrained optimization problem is equivalent to the following problem : which can be solved by an algorithm based on the bregman iteration ( i.e. see ) .similarly , is obtained by solving the optimization problem efficiently .suppose that the first levels are already constructed and let .in this case , the goal is to find satisfying define using the sopws basis , the above problem is equivalent to solving the following problem in sopws frequency space : vector is given and the objective is to find vector closest to that is shift orthogonal and perpendicular to to .theorems [ theorem:3equivalence2d ] and [ theorem : shiftperpendicular2d ] yield that in order to solve problem , for each and one needs to find vector that is closest to , perpendicular to for , and lives on the unit sphere .moreover note that , again by theorems [ theorem:3equivalence2d ] and [ theorem : shiftperpendicular2d ] , form an orthonormal set of vectors for each and , because elements of are constructed such that they are shift orthogonal and orthogonal to shift span of each other .hence , can be computed in two steps : figure [ fig : bcpws ] plots the first four bcpws in 1d using the proposed algorithms .these results are very consistent with the results in .table [ tab : efficiencycomparisons ] , highlights the computational speed gained by using the new procedure outlined in this section .the proof of theorems [ theorem : firstderivative ] and [ theorem : secondderivative ] are very similar .the idea of the proof is simple : write sopws basis in terms of fourier basis using formulas and , take appropriate number of derivatives , and then use formulas and to write back the result in terms of the sopws basis .* proof of theorem [ theorem : firstderivative ] : * first observe that because , it suffices to find and then by shifting , the corresponding formulas for sopws with other shift indices follow easily .now using formulas and , now from equations and using lemma [ lemma : sumomega ] , where on the other hand , equation implies that for , therefore , again , equation implies that for , therefore , substituting , and into equation yields that .\ ] ] this completest the proof . 
*proof of theorem [ theorem : secondderivative ] : * again observe that because , it suffices to find and then by shifting , the corresponding formulas for sopws with other shift indices follow easily .now using formulas and , now from equations and using lemma [ lemma : sumomega ] , where /3 \qquad & \hbox{if } j=0 , \\ ( -1)^{k}(4k-2)\csc^2(\pi j / l)-(-1)^k(2k-1)l & \hbox{if is odd } , \\2\csc^2(\pi j / l)-(k^2+(k-1)^2)l & \hbox{otherwise}. \end{cases}\ ] ] on the other hand , from , also from , substituting , and into equation and simplifying yields that this completest the proof .v. ozolin , r. lai , r. caflisch , s. osher , _ compressed plane waves yield a compactly supported multiresolution basis for the laplace operator _ , proceedings of the national academy of sciences , 111 ( 2014 ) , pp . 16911696 .e. laeng , _ une base orthonormale de do nt les lments sont bien localiss dans lespace de phase et leurs supports adapts toute partition symtrique de lespace des frquences _ , c. r. acad .paris , 311 ( 1990 ) , pp .
|
this paper presents a fast algorithm for projecting a given function onto the set of shift orthogonal functions ( i.e. the set of functions with unit norm that are orthogonal to their prescribed shifts ) . the algorithm can be parallelized easily and its computational complexity is bounded by , where is the number of coefficients used for storing the input . to derive the algorithm , a particular class of basis functions , called shift orthogonal basis functions , is introduced and some theory regarding them is developed .
|
since epidemic models were first introduced by kermack and mckendrick in , the study on mathematical models has been flourished .much attention has been devoted to analyzing , predicting the spread , and designing controls of infectious diseases in host populations ; see and the references therein .one of classic epidemic models is the sir ( susceptible - infected - removed ) model that is suitable for modeling some diseases with permanent immunity such as rubella , whooping cough , measles , smallpox , etc . in the sir model ,a homogeneous host population is subdivided into three epidemiologically distinct types of individuals : * ( s ) : the susceptible class , the class of those individuals who are capable of contracting the disease and becoming infective , * ( i ) : the infective class , the class of those individuals who are capable of transmitting the disease to others , * ( r ) : the removed class , the class of infected individuals who are dead , or have recovered , and are permanently immune , or are isolated .if we denote by the number of individuals at time in classes ( s ) , ( i ) , and ( r ) , respectively , the spread of infection can be formulated by the following deterministic system of differential equations : where is the per capita birth rate of the population , is the per capita disease - free death rate and is the excess per capita death rate of infective class , is the effective per capita contact rate , and is per capita recovery rate of the infective individuals . on the other hand, it is well recognized that the population is always subject to random disturbances and it is desirable to learn how randomness effects the models .thus , it is important to investigate stochastic epidemic models .jiang et al . investigated the asymptotic behavior of global positive solution for the non - degenerate stochastic sir model where , , and are mutually independent brownian motions , are the intensities of the white noises . however , in reality , the classes ( s ) , ( i ) , and ( r ) are usually subject to the same random factors such as temperature , humidity , pollution and other extrinsic influences . as a result , it is more plausible to assume that the random noise perturbing the three classes is correlated .if we assume that the brownian motions , , and are the same , we obtain the following model which has been considered in . compared to , is more difficult to deal with due to the degeneracy of the diffusion .one of the important questions is concerned with whether the transition to a disease free state or the disease state will survive permanently . for the deterministic model ,the asymptotic behavior has been classified completely as follows .if , then the population tends to the disease - free equilibrium while the population approaches an endemic equilibrium in case . in , similar results are given for a general epidemic model with reaction - diffusion in terms of basic reproduction numbers .in , the authors attempted to answer the aforementioned question for in case and . by using lyapunov - type functions, they provided some sufficient conditions for extinction or permanence as well as ergordicity for the solution of system . using the same methods , the extinction and permanence in some different stochastic sir modelshave been studied in etc . in practice , because of the randomness and the degeneracy of the diffusion , the model is much more difficult to deal with compared in contrast to the deterministic counter part . 
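for orientation, the deterministic classification recalled above is easy to reproduce numerically. the sketch below assumes the standard formulation ds/dt = a - mu*s - beta*s*i, di/dt = beta*s*i - (mu+rho+gamma)*i, dr/dt = gamma*i - mu*r, which is consistent with the parameter description given here but is an assumption on our part since the displayed equations are not legible; all parameter values are hypothetical.

```python
import numpy as np
from scipy.integrate import solve_ivp

# hypothetical parameters (not from the paper): recruitment a, death rate mu,
# contact rate beta, excess death rate rho of infectives, recovery rate gamma
a, mu, beta, rho, gamma = 1.0, 0.2, 0.3, 0.1, 0.2

def sir_rhs(t, y):
    S, I, R = y
    return [a - mu * S - beta * S * I,
            beta * S * I - (mu + rho + gamma) * I,
            gamma * I - mu * R]

R0 = beta * a / (mu * (mu + rho + gamma))      # basic reproduction number of this formulation
sol = solve_ivp(sir_rhs, (0.0, 400.0), [4.0, 1.0, 0.0])
S_end, I_end, R_end = sol.y[:, -1]
# I decays to zero when R0 <= 1 and settles near an endemic level when R0 > 1
print(f"R0 = {R0:.2f}; I(400) = {I_end:.4f}")
```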
moreover , although one may assume the existence of appropriate lyapunov function , it is fairly difficult to find an effective lyapunov function in practice .in other words , there has been no decisive classification for stochastic sir models that is similar to the deterministic case .our main goal in this paper is to provide such a classification .we shall derive a sufficient and almost necessary condition for permanence ( as well as ergodicity ) and extinction of the disease for the stochastic sir model by using a value , which is similar to in the deterministic model . note that such kind of results are obtained for a stochastic sis model in .however , the model studied there can be reduced to one - dimensional equation that is much easier to investigate .the method used in can not treat the stochastic sir model .estimation for the convergence rate is also not given in .a more general method therefore need to be introduced .the new method can remove most assumptions in as well as can treat the case , which has not been taken into consideration in . note that the case and indicates the random factors have opposite effects to healthy individuals and infected ones .for instance , patients with tuberculosis or some other pulmonary disease do not endure well in cool and humid weather while healthy people may be fine in such kind of weather .in addition , individuals with a disease , usually have weaker resistance to some other kinds of disease .our new method is also suitable to deal with other stochastic variants of such as models introduced in , etc .the rest of the paper is arranged as follows .section [ sec : thr ] derives a threshold that is used to classify the extinction and permanence of the disease . to establish the desired result , by considering the dynamics on the boundary , we obtain a threshold that enables us to determine the asymptotic behavior of the solution .in particular , it is shown that if the disease will decay in an exponential rate . in case , the solution converges to a stationary distribution in total variation .it means that the disease is permanent .the rate of convergence is proved to be bounded above by any polynomial decay .the ergodicity of the solution process is also proved .finally , section [ sec : ex ] is devoted to some discussion and comparison to existing results in the literature .some numerical examples are provided to illustrate our results .let be a complete probability space with the filtration satisfying the usual condition , i.e. , it is increasing and right continuous while contains all -null sets .let be an -adapted , brownian motions . because the dynamics of class of recover has no effect on the disease transmission dynamics, we only consider the following system : +\sigma_1 s(t)db(t),\\ di(t)=[\beta s(t)i(t)-(\mu+\rho+\gamma)i(t)]dt+\sigma_2 i(t)db(t ) .\end{cases}\ ] ] assume that . by the symmetry of brownian motions , without loss of generality , we suppose throughout this paper that using standard arguments , it can be easily shown that for any positive initial value , there exists uniquely a global solution that remains in with probability 1 ( see e.g. , ) . to obtain further properties of the solution, we first consider the equation on the boundary , let be the solution to with initial value .it follows from the comparison theorem ( * ? ? ?* theorem 1.1 , p.437 ) that a.s . by solving the fokker - planck equation, the process has a unique stationary distribution with density where and is the gamma function . 
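a quick way to explore the system above is an euler-maruyama discretization in which a single brownian increment drives both components, which is precisely the degeneracy just discussed. the drift of s is assumed to be a - mu*s - beta*s*i (the displayed equation is not legible), and all parameter values are hypothetical. the boundary equation obtained by setting i = 0 is simulated alongside, since its long-run average is the ergodic quantity that feeds the threshold defined next.

```python
import numpy as np

rng = np.random.default_rng(0)
a, mu, beta, rho, gamma = 1.0, 0.2, 0.3, 0.1, 0.2     # hypothetical parameter values
sigma1, sigma2 = 0.3, 0.4
dt, n_steps, burn_in = 1e-3, 1_000_000, 100_000

S, I = 4.0, 1.0
S_tilde, acc, cnt = a / mu, 0.0, 0                    # boundary process started near a/mu
for k in range(n_steps):
    dB = rng.normal(0.0, np.sqrt(dt))                 # one brownian increment for both equations
    S_new = S + (a - mu * S - beta * S * I) * dt + sigma1 * S * dB
    I_new = I + (beta * S * I - (mu + rho + gamma) * I) * dt + sigma2 * I * dB
    S, I = max(S_new, 1e-12), max(I_new, 1e-12)       # crude positivity guard for the explicit scheme
    S_tilde = max(S_tilde + (a - mu * S_tilde) * dt + sigma1 * S_tilde * dB, 1e-12)
    if k >= burn_in:
        acc, cnt = acc + S_tilde, cnt + 1

print("final (S, I):", S, I)
print("time average of the boundary process:", acc / cnt)   # close to a/mu for this drift
```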
by the strong law of large numberwe deduce that to proceed , we define the threshold as follows : [ thm2.1 ] if , then for any initial value we have a.s .and the distribution of converges weakly to the unique invariant probability measure with the density .let be the solution to where is the solution to . by comparison theorem , a.s .given that .in view of the it formula and the ergodicity of , that is , converges almost surely to 0 at an exponential rate . for any , it follows from that there exists such that where clearly , we can choose satisfying .let be the solution to given that .we have from the comparison theorem that . in view of the it formula , for almost all have \tau+\beta\int_{t_0}^ti_{u , v}(\tau)d\tau.\\ \leq&\beta\int_{t_0}^t\exp\big\{\dfrac{\lambda\tau}2\big\}d\tau= -\dfrac{2\beta}{\lambda}\big(\exp\big\{\dfrac{\lambda t_0}2\big\}-\exp\big\{\dfrac{\lambda t}2\big\}\big)<{\varepsilon}. \end{aligned}\ ] ] as a result , let be the distribution of a random variable provided that admits as its distribution . in lieu of proving that the distribution of converges weakly to , we claim an equivalent statement that the distribution of converges weakly to . by the portmanteau theorem ( see ( * ? ? ?* theorem 1 ) ) , it is sufficient to prove that for any satisfying and , we have since the diffusion given by is non - degenerate , it is well known that the distribution of weakly converges to as ( see e.g. , ) .thus note that applying and to yields since is taken arbitrarily , we obtain the desired conclusion .the proof is complete .we now focus on the case .let be the transition probability of . to obtain properties of , we first rewrite equation in stratonovich s form +\sigma_1s(t)\circ db(t),\\ di(t)=[-c_2i(t)+\beta s(t)i(t)]dt+\sigma_2 i(t)\circ db(t ) .\end{cases}\ ] ] where put to proceed , we first recall the notion of lie bracket . if and are vector fields on then the lie bracket ] spans at every . as a result , the transition probability has smooth density .this lemma has been proved in for the case .assume that .it is easy to obtain (x , y)=\begin{pmatrix } \alpha -r\beta xy\\ -\beta xy \end{pmatrix } , \\e:=&[c , d](x , y)=\begin{pmatrix } -\alpha + r^2\beta xy\\ -\beta xy \end{pmatrix } , \\ f:=&[c , e](x , y)=\begin{pmatrix } \alpha - r^3\beta xy\\ -\beta xy \end{pmatrix}.\end{aligned}\ ] ] since only if or ( since ) . when , solving obtains which implies as a result , span for all .the lemma is proved . in order to describe the support of the invariant measure ( if it exists ) and to prove the ergodicity of, we need to investigate the following control system on where is taken from the set of piecewise continuous real - valued functions defined on .let be the solution to equation with control and initial value .denote by the reachable set from , that is the set of such that there exists a and a control satisfying .we now recall some concepts introduced in .let be a subset of satisfying the property that for any , .then there is a unique maximal set such that this property still holds for .such a is called a control set .a control set is said to be invariant if for all .putting and , we have an equivalent system where and .\end{aligned}\ ] ] [ lem2.3 ] for the control system , the following claims hold 1 . for any and , there exists a control and such that , .2 . for any , there is a , a control , and such that and that .3 . let 1 .if then for any , there is , a control , and such that and that .2 . 
suppose that and .if , there is and a control and such that and that .however , there is no control and such that .suppose that and let we choose with .it is easy to check that with this control , there is such that , . if , we can construct similarly .then the claim 1 is proved . by choosing to be sufficiently large, there is a such that .this property , combining with , implies the existence of a feedback control and satisfying that and that .we now prove claim 3 . if then =-\infty\ ] ] and =0 \ \hbox { if } \r>1.\ ] ] as a result , if ] , which implies that there is a feedback control and satisfying and . if ] which implies the desired claim . consider the remaining case when ] we have where therefore , -\leq [ \ln v]_-+c_2t+\sigma_2 |b(t)|.\ ] ] this implies that ^ 2_-{\boldsymbol{1}}_a\leq & [ \ln v]^2_-{\boldsymbol{1}}_a+\big(c^2_2t^2+\sigma^2_2 b^2(t)\big){\boldsymbol{1}}_a+2c_2t[\ln v]_-{\boldsymbol{1}}_a \\&+2\sigma_2|b(t)|{\boldsymbol{1}}_a[\ln v]_-+2c_1t\sigma_2|b(t)|{\boldsymbol{1}}_a .\end{aligned}\ ] ] by using hlder inequality , we obtain taking expectation both sides and using the estimate above , we have ^ 2_-{\boldsymbol{1}}_a\leq[\ln v]_-^2{\mathbb{p}}(a)+k_3t\sqrt{{\mathbb{p}}(a)}[\ln v]_-+k_4t^2\sqrt{{\mathbb{p}}(a)},\ ] ] for some positive constants we now , choose satisfying choose so large that [ lem2.5 ] for and chosen as above , there is and such that \}\geq 1-{\varepsilon}\ ] ] for all , v\in(0,\delta]. ] the proof is complete .[ prop2.3 ] assume .let and be as in lemma [ lem2.5 ] .there exists independent of such that -^2\leq [ \ln v]_-^2-\lambda t[\ln v]_-+k_5 t^2\ ] ] for any . ] we have where \big\}.\ ] ] in we have hence , -\leq [ \ln v]_--\frac{3\lambdat}4\;\;\forall\ , t\in[t^ * , 2t^*].\ ] ] as a result -^2\leq [ \ln v]_-^2-\frac{3\lambda t}2[\ln v]_-+\frac{9\lambda^2 t^2}{16}\;\;\forall\ , t\in[t^ * , 2t^*].\ ] ] which implies that ^ 2_-\right]\leq{\mathbb{p}}(\omega_{u , v})[\ln v]_-^2-\frac{3\lambda t}{2}{\mathbb{p}}(\omega_{u , v})[\ln v]_-+\frac{9\lambda^2 t^2}{16}{\mathbb{p}}(\omega_{u , v}).\ ] ] in we have from lemma [ lem2.4 ] that ^ 2_-\right]\leq{\mathbb{p}}(\omega^c_{u , v})[\ln v]_-^2+k_3 t \sqrt{{\mathbb{p}}(\omega^c_{u , v})}[\ln v]_-+k_4t^2 \sqrt{{\mathbb{p}}(\omega^c_{u , v})}.\ ] ] adding and side by side , we obtain ^ 2_-\leq[\ln v]_-^2+\big(-\frac{3\lambda}2(1-{\varepsilon})+k_3\sqrt{{\varepsilon}}\big)t[\ln v]_-+\big(\frac{9\lambda^2}{16}+k_4\big)t^2 .\ ] ] in view of we deduce ^ 2_-\leq[\ln v]_-^2-\lambda t[\ln v]_-+\big(\frac{9\lambda^2}{16}+k_4\big)t^2 .\ ] ] now , for and we have from lemma [ lem2.4 ] that ^ 2_-&\leq[\ln v]_-^2+k_3 t[\ln v]_-+k_4t^2 \\&\leq |\ln\delta|^2+k_3 t|\ln\delta|+k_4t^2 .\end{aligned}\ ] ] letting sufficiently large such that and ] . in view of propositions [ prop2.3 ] , [ prop2.4 ] and , there is a compact set , satisfying applying and lemma [ petite ] to ( * ? ? ?* theorem 3.6 ) we obtain that for some invariant probability measure of the markov chain .it is shown in the proof of ( * ? ? ?* theorem 3.6 ) that implies where . in view of (* theorem 4.1 ) , the markov process has an invariant probability measure . as a result , is also an invariant probability measure of the markov chain . in light of, we must have , that is , is an invariant measure of the markov process . 
in the proofs ,we use the function -^2 ] for any small in the same manner .we can show that there is , a compact set satisfying ^{\frac1{1+q}}+h_q{\boldsymbol{1}}_{\{(u , v)\in k_p\}}\,\forall ( u , v)\in{\mathbb{r}}^{2,\circ}_+,\ ] ] where -^{1+q}.$ ] then applying ( * ? ? ?* theorem 3.6 ) , we can obtain since is decreasing in , we easily deduce where . on the one hand , in view of lemma [ lem2.3 ] ,the invariant control set of , says , is if and if . by ( * ? ? ?* lemma 4.1 ) , is exactly the support of the unique invariant measure .the strong law of large number can be obtained by using ( * ? ? ?* theorem 8.1 ) or .we have shown that the extinction and permanence of the disease in a stochastic sir model can be determined by the sign of a threshold value .only the critical case is not studied in this paper . to illustrate the significance of our results ,let us compare our results with those in .[ thm3.1bs ] assume that .let be a solution of system .if and then there exists a stationary distribution for the markov process which is the limit in total variation of transition probability . here by straightforward calculations or by arguments in section 4 of we can show that their conditions are much more restrictive than the condition .moreover , it should be noted that theorem [ thm2.1 ] is the same as lemma 3.5 in .in contrast to the aforementioned paper , we provide a rigorous proof of theorem [ thm2.1 ] here . moreover , the conclusions in theorems [ thm2.1 ] and [ thm2.2 ] still hold for the non - degenerate model . as a resultwe have the following theorem for model .[ thm3.2 ] let be the solution to with initial value .define as . if , then a.s . andthe distribution of converges weakly to , which has the density .if , the solution process has a unique invariant probability measure whose support is .moreover , the transition probability of converges to in total variation .the rate of convergence is bounded above by any polynomial rate . moreover , for any -integrable function , we have [ ex1 ] consider with parameters , , , , , , and direct calculation shows that , , and by virtue of theorem [ thm2.2 ] , has a unique invariant probability measure whose support is .consequently , the strong law of large numbers and the convergence in total variation norm of the transition probability hold .a sample path of solution to is illustrated by figures [ f1.1s ] , while the phase portrait in figure [ f1.1ss ] demonstrates that the support of lies above and includes the curve as well as the empirical density of . in non - degenerate casewith this same set of parameters the empirical density of is illustrated by figure [ f1.1sss ] .of the support of and the empirical density of in example [ ex1].,title="fig:",width=283,height=207 ] of the support of and the empirical density of in example [ ex1].,title="fig:",width=283,height=207 ] [ ex2 ] consider with parameters , , , , , , and for these parameters , the conditions in theorem [ thm3.1bs ] are not satisfied .we obtain as a result of theorem [ thm2.2 ] , has a unique invariant probability measure whose support is .consequently , the strong law of large numbers and the convergence in total variation norm of the transition probability hold .a sample path of solution to is depicted in figures [ f1.2 ] , while the phase portrait in figure [ f1.22 ] demonstrates that the support of and the empirical density of .[ ex3 ] consider with parameters , , , , , and it can be shown that as a result of theorem [ thm2.1 ] , a.s . 
as .this claim is supported by figures [ f4.1 ] .that is , the population will eventually have no disease .the distribution of convergence to as the graphs of and empirical density of at are illustrated by figure [ f4.2 ] .f. ball , d. sirl , an sir epidemic model on a population with random network and household structure , and several types of individuals , _ adv . in appl .probab . , _ 44 ( 2012 ) ,no . 1 , 63 - 86 . m. barczy , g. pap , portmanteau theorem for unbounded measures , _ statist .lett_. , 76(2006 ) , 1831 - 1835 .f. brauer , c. c. chavez , _ mathematical models in population biology and epidemiology _ , springer - verlag new york , 2012. y. cai , y. kang , m. banerjee , w. wang , a stochastic sirs epidemic model with infectious force under intervention strategies ._ j. differential equations _ 259 ( 2015 ) , no . 12 , 7463 - 7502 .nguyen , g. yin , conditions for permanence and ergodicity of certain stochastic predator - prey models , to appear in _ j. appl_ m. gathy , c. lefevre , claude from damage models to sir epidemics and cascading failures , _ adv . in appl .probab . , _ 41 ( 2009 ) ,1 , 247 - 269 .k. ichihara , h. kunita , a classification of the second order degenerate elliptic operators and its probabilistic characterization , _ z. wahrsch .gebiete , _ 30 ( 1974 ) , 235 - 254 .corrections 39(1977 ) , 81 - 84 .n. ikeda , s. watanabe , _stochastic differential equations and diffusion processes , _ second edition , north - holland publishing co. , amsterdam , ( 1989 ) .s. f. jarner , g. o. roberts , polynomial convergence rates of markov chains , _ ann .prob . , _ 12 ( 2002 ) , 224 - 247 .jiang , j.j .shi , asymptotic behavior of global positive solution to a stochastic sir model , _ math .modell . , _54(2011 ) , 221 - 232 . c.y .jiang , n.z .shi , the behavior of an sir epidemic model with stochastic perturbation , _ stochastic anal ., _ 30 ( 2012 ) , 755 - 773 .knipl , g. rost , j. wu , epidemic spread and variation of peak times in connected regions due to travel - related infections - dynamics of an antigravity - type delay differential model ._ siam j. appl ._ , 12 ( 4 ) ( 2013 ) , 1722 - 1762 .i. kortchemski , a predator - prey sir type dynamics on large complete graphs with three phase transitions , _ stochastic process . appl . , _ 125 ( 2015 ) ,3 , 886 - 917 .y. lin , d. jiang , p. xia , long - time behavior of a stochastic sir model , _ appl .comput . , _ 236 ( 2014 ) , 1 - 9 .x. mao , _ stochastic differential equations and their applications _ , horwood publishing chichester , 1997 .s. p. meyn , r. l. tweedie , _ markov chains and stochastic stability , _ springer , london , 1993 .s. p. meyn , r. l. tweedie , stability of markovian processes ii : continuous - time processes and sampled chains ._ adv . in appl ._ , ( 1993 ) , 487 - 517 .nguyen , g. yin , coexistence and exclusion of stochastic competitive lotka - volterra models , submitted .e. nummelin , _ general irreducible markov chains and non - negative operations _ , cambridge press , ( 1984 ). f. selley , a. besenyei , i.z .kiss , p.l .simon , dynamic control of modern , network - based epidemic models ._ siam j. appl .syst . , _ 14 ( 2015 ) ,no . 1 , 168 - 187 .w. wang , x. q. zhao , basic reproduction numbers for reaction - diffusion epidemic models ._ siam j. appl .syst . , _ 11 ( 2012 ) , no . 4 , 1652 - 1673. y. zhou , w. zhang , s. yuan , survival and stationary distribution of a sir epidemic model with stochastic perturbations , _ appl .comput . , _ 244 ( 1 ) ( 2014 ) , 118 - 131 .
|
this paper investigates the asymptotic behavior of a stochastic sir epidemic model , which is a system with degenerate diffusion . it gives sufficient conditions that are very close to the necessary conditions for permanence . in addition , this paper establishes ergodicity of the underlying system . it is proved that the transition probabilities converge in total variation norm to the invariant measure . our result gives a precise characterization of the support of the invariant measure . rates of convergence are also ascertained . it is shown that the rate is not far from exponential , in the sense that the convergence is faster than any polynomial rate . * keywords . * sir model ; extinction ; permanence ; stationary distribution ; ergodicity . * subject classification . * 34c12 , 60h10 , 92d25 . * running title . * classification in a stochastic sir model
|
time - delayed feedback control has been used as a method of stabilizing unstable periodic orbits ( upos ) or spatially extended patterns by a number of authors .the method of pyragus , sometimes called ` time - delayed autosynchronization ' ( tdas ) , has attracted much attention . here , the feedback is proportional to the difference between the current and a past state of the system .that is , where is some state vector , is the period of the targeted upo and is a feedback gain matrix .advantages of this method include the following .first , since the feedback vanishes on any orbit with period , the targeted upo is still a solution of the system with feedback .control is therefore achieved in a non - invasive manner .second , the only information required a priori is the period of the target upo , rather than a detailed knowledge of the profile of the orbit , or even any knowledge of the form of the original odes , which may be useful in experimental setups .the method has been implemented successfully in a variety of laboratory experiments on electronic , laser , plasma , and chemical systems , as well as in pattern - forming systems ; more examples can be found in a recent review by pyragus .a paper of nakajima gave a supposed restriction on the method of pyragus .it was believed that if a upo in a system with no feedback had an odd number of real floquet multipliers greater than unity , then there was no choice of the feedback gain matrix for which the method of pyragus could be used to stabilize the upo . however , a recent paper of fiedler _ et al . _ gives a counterexample to this restriction .they add pyragus - type feedback to the normal form of a subcritical hopf bifurcation and show that the subcritical periodic orbit can be stabilized for some values of the feedback gain matrix .the hopf normal form is two - dimensional , so the subcritical orbit has exactly one unstable floquet multiplier .the mechanism for stabilizing the orbit is through a transcritical bifurcation with a stable delay - induced periodic orbit .just _ et al . _ investigate a series of bifurcations in this system , which has the attractive feature that , despite the presence of the delay terms , much of the analysis can be carried out analytically .the subcritical hopf bifurcation of a stable equilibrium is a generic mechanism for creating upos with an odd number of unstable floquet multipliers .such bifurcations occur in a number of physical systems , such as the belousov zhabotinsky reaction - diffusion equation , the hodgkin huxley model of action potentials in neurons , and in nmr lasers .the reduction of these higher - dimensional dynamical systems to the two - dimensional normal form of the hopf bifurcation problem is a standard procedure .moreover , if pyragus - type feedback delay terms were added to the model odes , then these ( infinite - dimensional ) dynamical systems could likewise be reduced to the standard two - dimensional normal form in a vicinity of a hopf bifurcation , with the parameters of the feedback control matrix modifying the coefficients in the normal form . despite this disconnect between center manifold reduction of delay equations to hopf normal form , and the example of fiedler _ et al . 
_in which the feedback delay terms are added directly to the hopf normal form , we find that the same stabilization mechanism of subcritical hopf orbits applies to both their example and to the one we present for the lorenz equations .specifically , we study a subcritical hopf bifurcation of a stable equilibrium in the lorenz equations , and show that pyragus - type feedback can stabilize the subcritical periodic orbit . as in the example in , in the absence of feedback, the bifurcating periodic orbit has exactly one real unstable floquet multiplier .it also has one stable floquet multiplier , and one floquet multiplier equal to one ( corresponding to the neutral direction along the orbit ) .the gain matrix multiplying the pyragus feedback terms can be chosen in many different ways .we give two examples in which we choose the structure of the gain matrix in different ways and show that they give quite different results . in our first example, we choose the gain matrix in a manner suggested by the results in and : there is no feedback in the stable direction of the upo , and pyragus - type feedback in the direction of the unstable floquet multiplier , which is identical in form to the feedback in . in this way, the problem of choosing the nine parameters in the gain matrix is reduced to one of making an informed choice of the two parameters employed in .we find that the subcritical orbit can be stabilized over a wide range of values of our two bifurcation parameters : the amplitude of the feedback gain , and the usual control parameter in the lorenz equations .we identify a codimension - two point in the hopf normal form example , where two hopf bifurcations collide , and show that the same codimension - two point can be found in the lorenz system with this choice of feedback , and the bifurcation structure is qualitatively the same in the two cases .this codimension - two point captures the stabilization mechanism in both examples : the periodic orbits created by the two hopf bifurcations exchange stability in a transcritical bifurcation .the curve of transcritical bifurcations in our two - parameter plane emanates from the hopf - hopf codimension - two point .our second choice of the gain matrix is a real multiple of the identity .this is also a natural choice , but in contrast with our first example we show that here the subcritical orbit can not be stabilized for any parameters close to the original hopf bifurcation .our two bifurcation parameters are again the amplitude of the gain and the parameter in the lorenz equations .we give analytical results on the location of hopf bifurcation curves and hence deduce the stability of the periodic orbit as it bifurcates .this paper is organized as follows . in section[ sec : fied ] we review some results from fiedler _ et al . _ and just _ et al . _ . in section [ sec : lor ] we give our example system of the lorenz equations with pyragus feedback .we give two examples of the choice of gain matrix .we explain for the first example how we choose the gain matrix to stabilize the subcritical hopf orbit , and show that the bifurcation structure of this system is the same as that for the normal form system . 
for the second example , the gain matrix is a real multiple of the identity and we show that the subcritical orbit can not be stabilized .section [ sec : conc ] concludes .in this section we recap the results of and identify a particular codimension - two point in the hopf normal form with delay which we will later examine for the lorenz equations with feedback .this codimension - two point acts as an organizing center for the bifurcations involved in the mechanism for stabilizing the periodic orbit. the normal form of a subcritical hopf bifurcation with a pyragus - type delay term is : with , and parameters .the feedback gain , and the delay .the linear hopf frequency has been normalized to unity by an appropriate scaling of time .we consider as the primary bifurcation parameter .we consider only ; this is the case in the lorenz example . for the system with no feedback ( i.e. ) we can write and then periodic orbits exist with amplitude if , so and the orbits have minimal period .we refer to these orbits as the _ pyragus orbits _ , and it is these orbits that we wish to stabilize non - invasively by adding an appropriate feedback term ( i.e. with ) . following and , we define the _ pyragus curve _ in - space , along which the feedback vanishes on the pyragus orbits : we plot this curve in - space in figure [ fig : thtp ] , along with curves of hopf bifurcations from the zero solution .in later sections , we set , as our main purpose is the non - invasive stabilization of the pyragus orbits .the zero solution of undergoes hopf bifurcations when the characteristic equation has purely imaginary solutions .setting in and linearizing we find : writing and separating into real and imaginary parts gives , \label{eq : hf1 } \\\omega -1&=b_0[\sin(\beta-\omega\tau)-\sin\beta ] .\label{eq : hf2}\end{aligned}\ ] ] these equations define the hopf curves , in - space , parameterized by the linear frequency associated with the bifurcating periodic orbit .there are multiple branches to this curve , which we show in figure [ fig : thtp](a ) , but we concentrate on the one which intersects the curve at . the solution of the characteristic equation at , has and corresponds to the hopf bifurcation to the pyragus orbit .figure [ fig : thtp ] shows the possible configurations of the curves and as the parameter is varied .the curves typically cross in two places : at , and at a second location depending on . at ,the two curves are tangent at and only intersect once .just _ et al . _ show that for simplicity , we assume , so we must have .we define a curve of hopf bifurcations in - space by the location of the second intersection of and .this is a hopf bifurcation to a _ delay - induced periodic orbit _ , that is , a periodic orbit arising from the addition of the delay terms ; one for which the feedback does not vanish .we adopt the convention that a hopf bifurcation of a stable equilibrium is called ` supercritical ' ( ` subcritical ' ) if the resulting periodic orbit bifurcates into the parameter regime where it coexists with the unstable ( stable ) equilibrium .such a supercritical bifurcation generically produces a stable periodic orbit , while the subcritical case produces an unstable periodic orbit . 
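the controlled normal form recapped above can also be integrated directly with a simple history-buffer scheme. the sketch below assumes the commonly quoted form dz/dt = (lambda + i) z + (1 + i*gamma)|z|^2 z + b0*exp(i*beta)(z(t-tau) - z(t)), which is not reproduced explicitly in the text, and uses hypothetical parameter values; it is a schematic explicit-euler integration, not a substitute for the continuation computations discussed later.

```python
import numpy as np

# assumed form (see lead-in); all parameter values below are hypothetical illustrations
lam, gam = -0.005, -10.0
b0, beta = 0.3, np.pi / 4.0
Omega = 1.0 - gam * lam                  # frequency of the pyragus orbit, whose amplitude is sqrt(-lam)
tau = 2.0 * np.pi / Omega                # pyragus choice: delay equal to the orbit period

dt = 1e-3
delay_steps = int(round(tau / dt))
buf = np.full(delay_steps, 0.02 + 0.0j)  # constant history segment as initial condition
z, head = buf[-1], 0

for _ in range(300_000):                 # explicit euler stepping of the dde with a ring buffer
    z_delay = buf[head]                  # z(t - tau)
    buf[head] = z                        # store z(t) for use one delay from now
    head = (head + 1) % delay_steps
    dz = (lam + 1j) * z + (1.0 + 1j * gam) * abs(z) ** 2 * z \
         + b0 * np.exp(1j * beta) * (z_delay - z)
    z = z + dt * dz

# if the control succeeds, |z| settles near sqrt(-lam);
# with b0 = 0 the trajectory decays to the stable origin instead
print(abs(z), np.sqrt(-lam))
```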
in the absence of feedback ,the bifurcating orbit is subcritical and unstable .the mechanism for stabilization involves the additional delay - induced hopf bifurcation at .this bifurcation can change the trivial equilibrium from being stable to being unstable .consequently , the pyragus orbit may then co - exist with an unstable periodic orbit . as the hopf bifurcation at passes through , the original hopf bifurcation to the pyragus orbit at changes from a subcritical one to a supercritical one .this is all done without otherwise altering the form of the pyragus orbit .the hopf bifurcation of the zero solution to the pyragus orbit at changes from subcritical to supercritical as described above as is increased through , since for , the curve lies ` inside ' . in this sense , is the smallest value of the feedback gain for which the pyragus orbit is stabilized immediately after the bifurcation point .the minimum positive can be selected by choosing such that .we now review some of the details of the bifurcation structure of the system which are described in just _ et al . _ , and identify the codimension - two point we examine in the lorenz system .we consider and as two bifurcation parameters , and fix .the mechanism by which the pyragus orbit is stabilized is through a transcritical bifurcation with a delay - induced periodic orbit . as shown in just_ et al . _ , the transcritical bifurcations occur when or , in - space , since , when this line of transcritical bifurcations collides in - space with the two curves of hopf bifurcations and at , at a double - hopf codimension - two point . in figure[ fig : bifs ] we sketch the bifurcation structure around this point in - space . from this figurewe can see that in order for the pyragus orbit to bifurcate stably ( i.e. supercritically ) at , we must have .we now use the lorenz equations as an example system to demonstrate that the feedback described above can also stabilize orbits arising in a subcritical hopf bifurcation in a higher - dimensional system of differential equations .we give two examples of a choice of feedback gain matrix .the first choice is informed by the results given above , and for the second choice we set the gain matrix equal to a real multiple of the identity . in the first example, we further locate the codimension - two point described in section [ sec : cod2 ] , in the lorenz system with feedback , and show that the bifurcation structure is the same as in the normal form case .the lorenz equations are most often written in the following form : for real parameters , and .lorenz and most other authors studied the parameter regime , , , and we continue in the same manner .taking as the primary bifurcation parameter , the zero solution is stable for and loses stability in a supercritical pitchfork bifurcation at .two further equilibria are created at as is increased further , these equilibria each undergo a subcritical hopf bifurcation at ( see and for further details ) .it is this bifurcation that we study in the following , so we shift coordinates to be centered around and rescale to obtain : figure [ fig : sub ] shows a bifurcation diagram of the subcritical bifurcation of the zero solution of , and also the period of the bifurcating orbits , which we use to determine the delay time in the controlled system .the periodic orbit exists for ; at the lower boundary it collides with a fixed point in a homoclinic bifurcation .we now add pyragus - type feedback to the lorenz equations .we write where , etc . 
and is a real feedback gain matrix , to be determined . in this sectionwe use the results of and to inform our choice of the control matrix . in general , would contain nine independent parameters , but the method we describe reduces this to only two . in section [ sec : id_feed ]we describe the dynamics when is a real multiple of the identity .note that if this system were reduced to normal form around the hopf bifurcation point , the resulting equations would not be the same as .that is , there would be no delay terms ; the delay terms here would only have the affect of altering the parameters in the usual hopf normal form .see for more details . at the bifurcation point ( ) , has one real negative eigenvalue ( which we denote by ) , and a pair of purely imaginary eigenvalues ( , ) .the center manifold of the original problem with no feedback is therefore two - dimensional , and the eigenvectors of can be found explicitly ( see ) .close to the bifurcation point , the subcritical orbit will lie in a two - dimensional manifold which is close to the center subspace at the bifurcation point .we therefore choose where is the matrix of eigenvectors which puts in jordan normal form , that is and there is then no feedback in the stable direction , and pyragus - type feedback in the directions tangent to the center manifold . in the following numerical results , we set , as in fiedler _ et al_. , and vary .we use the continuation package dde - biftool to analyze the delay - differential equation .the primary bifurcation parameter is , with the subcritical hopf bifurcation for the system without feedback occurring at .recall we have set , and .first , we locate hopf bifurcations of the trivial solution in the - plane , for various values of .figure [ fig : thtp ] shows curves of hopf bifurcations for and .we also plot the curve , given by the period of the bifurcating subcritical orbits ( see figure [ fig : sub ] ) . figure [ fig :thtp ] is qualitatively similar to figure [ fig : thtp ] ( the corresponding figure for the normal form case ) . for , lies inside , and so we expect , by analogy with the normal form case , that choosing will stabilize the pyragus orbit . for , lies outside , and so the feedback can not stabilize the pyragus orbit near onset , since it bifurcates subcritically ( i.e. it coexists with the stable equilibrium from which it bifurcates ) .the codimension - two point occurs at some value of that is the boundary between these cases .we use dde - biftool to locate this codimension - two point . as in the normal form case , we set .we do not have an analytic form for , so we numerically estimate in the following way . for set equal to the period of the bifurcating period orbits for the system with no feedback ( see figure [ fig : sub ] ) .we want to continue into , so we can complete both sides of the bifurcation diagram , so here we set , where , and .this choice of ensures that is continuous and has continuous first derivative at . with the parameter restriction , we generate curves of hopf bifurcations from the zero solution in the - plane ; these are shown in figure [ fig : lor_hopfs ] . 
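the construction of the gain matrix used in these computations can be made concrete numerically. the sketch below evaluates the jacobian of the standard lorenz vector field at the fixed point undergoing the subcritical hopf bifurcation and assembles one concrete reading of the prescription, namely zero gain along the stable eigendirection and gains b0*exp(+/- i*beta) along the two centre directions; this reading, the use of unshifted coordinates, and the values of b0 and beta are assumptions on our part.

```python
import numpy as np

sigma, b = 10.0, 8.0 / 3.0
r_h = sigma * (sigma + b + 3.0) / (sigma - b - 1.0)   # subcritical hopf of the fixed points C+/-
xs = np.sqrt(b * (r_h - 1.0))                         # coordinates of C+ : (xs, xs, r_h - 1)

# jacobian of the lorenz vector field evaluated at C+
A = np.array([[-sigma, sigma, 0.0],
              [1.0,    -1.0,  -xs],
              [xs,      xs,   -b]])
evals, P = np.linalg.eig(A)
order = np.argsort(evals.real)        # real negative eigenvalue first, complex pair last
evals, P = evals[order], P[:, order]
print(np.round(evals, 4))             # expect one real negative value and a nearly pure imaginary pair

# assumed reading of the prescription: no feedback along the stable eigendirection,
# gains b0*exp(+/- i*beta) along the two centre directions, mapped back to original coordinates
b0, beta = 0.5, np.pi / 4.0           # hypothetical control parameters
D = np.diag([0.0, b0 * np.exp(1j * beta), b0 * np.exp(-1j * beta)])
K = P @ D @ np.linalg.inv(P)
print(np.round(K.real, 4), np.abs(K.imag).max())   # K should come out numerically real
```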
we can then estimate the location of the codimension - two point , at , the point where the two curves of hopf bifurcations cross .we find .we follow a path around the codimension - two point and track the amplitude and stability of the bifurcating periodic orbits ; a bifurcation diagram of the periodic orbits is shown in figure [ fig : lor_hopfs ] .the transcritical bifurcation of periodic orbits can clearly be seen .note that figure [ fig : lor_hopfs ] is qualitatively similar to figure [ fig : bifs ] , showing that the bifurcation structure in the normal form case , and in our lorenz example are the same . with the additional feedback ,the pyragus orbits are stable for a wide parameter range . in figure[ fig : po_stab ] , we show the stability of the pyragus orbits as and are varied .the transcritical bifurcation can be seen as the boundary of the stable region which terminates at the codimension - two point .the orbits also undergo an instability at around . along this boundary of the stable region , the periodic orbits have a floquet multiplier equal to , and so the instability is a period - doubling bifurcation . in figure [fig : tplot1 ] we show results of forward time integration of the delay - differential equation , at , with .initally , , and the pyragus orbit is stable .the feedback is then turned off ( i.e. ) at , and the trajectory decays back to the zero solution .we have used dde - biftool to confirm the stability of these orbits .the structure around the codimension - two point also tells us that the delay - induced orbits can be stable in the region .for example , at , , , we can use dde - biftool to show that there exists a stable delay - induced periodic orbit with a period of .figure [ fig : tplot2 ] shows time integration at these parameter values , with feedback turned on at . the figure shows the chaotic attractor for and an approach to a stable periodic orbit for .another natural choice for the gain matrix is a real multiple of the identity . as a comparison to the results given above, we now consider this case , so write for this choice of we can obtain analytical results , and we show that the bifurcation structure is different from our previous example , whatever the value of . we show first that unstable periodic orbits with real positive floquet exponents can not be stabilised using this form of feedback .we then discuss the shape and location of the curves of hopf bifurcation from the origin , in a similar manner to the previous section , to give a comparison of the two types of feedback .the codimension - two point described previously does not exist , and the hopf bifurcation to the pyragus orbit is always subcritical .that is , the periodic orbit in always bifurcates unstably from an equilibrium which is stable in and unstable in .consider a pyragus solution of , which is periodic with period , and with as in .then if is a floquet exponent of ( in the system with no feedback ) , the characteristic equation for in the system with feedback is given by both here , and in the analysis which follows below , simplification is possible because is a multiple of the identity .if has one floquet exponent which is real and positive , then it can be shown ( see e.g. 
) that there always exists at least one solution which is real and positive .hence stabilization of can not be achieved .we note that at the subcritical hopf bifurcation from the zero solution in , the orbit which bifurcates has one real stable multiplier ( inherited from the zero solution ) and one neutral multiplier , and so the remaining unstable multiplier must be real . therefore the pyragus orbit can not be stabilized close to the hopf bifurcation using this type of feedback . in addition, we now follow the method of the previous section to find curves of hopf bifurcations of the zero solution of in - space .this provides us with a comparison of the two types of feedback used in this and the previous section .we are particularly interested in those curves which pass through , with hopf frequency equal to , as this is the hopf bifurcation to the pyragus orbit .consider the linearisation of about the origin , and write .then and so for a non - trivial solution ( ) to exist , must satisfy the characteristic equation : =0.\ ] ] note that we have been able to simplify this equation because in this example is a multiple of the identity .this tells us that are the eigenvalues of .when is close to , has one negative eigenvalue , and a complex conjugate pair we denote as .note that , and . we find curves of hopf bifurcations in - space , by writing ( ) and setting equal to the eigenvalues of , .we note that setting equal to the negative eigenvalue of , , does not produce any hopf curves which pass through , so we do not consider these here .equating real and imaginary parts gives equations and describe curves of hopf bifurcations in - space , parameterized by the hopf frequency .figure [ fig : hopf_id ] shows examples of the shape of these hopf curves ; compare with figure [ fig : thtp ] showing the hopf curves in our previous example .the zero solution is unstable to the right of the curves and stable to the left of the curves .we now explain why we expect the curves to have this shape .we have : note that we need for solutions to exist .that is , if , we need , and if , we need .since is a monotonically increasing function over the range of we are considering , this gives a connected range of for which hopf bifurcations can occur , with boundaries at ( since ) and at ( where ) .note that at , , and so this is the hopf bifurcation to the pyragus orbit .the solutions for along the hopf curve solve since , at , , and similarly since , at , .the curve of hopf bifurcations is thus tangent to the lines and , and forms a series of wiggles between these value of .in particular , the curve is tangent to at ) , where , the hopf bifurcation to the pyragus orbit .therefore , for any value of , the pyragus curve ( which originates at , and is also shown in figure [ fig : hopf_id ] ) , will always be to the left of the hopf curves at .the zero equilibrium is therefore stable along the pyragus curve close to the bifurcation point in , and so the pyragus orbit will always bifurcate unstably .figure [ fig : postab ] shows a contour plot of the largest floquet multipliers of the periodic orbit , as and are varied .for all values shown the periodic orbit is unstable .in this paper , we have demonstrated how the mechanism used by fiedler _ et al . 
_ for stabilizing periodic orbits with one unstable floquet multiplier carries over to higher dimensional systems , using the lorenz equations as an example .we use the results from their idealized example to inform our choice of the feedback gain matrix , and this method follows a set prescription which we expect could be used on other systems . first we find the two - dimensional linear center eigenspace of the system with no feedback at the hopf bifurcation point .feedback is then added in the directions lying tangent to this center subspace , using a gain matrix of the form given by fiedler _ et al .this then leaves only the two parameters and to be chosen .for the example of the lorenz equation , the subcritical orbits are stabilized over a wide range of parameters .we contrast this with an example of choosing the gain matrix as a real multiple of the identity . in this casethe pyragus orbit could not be stabilised . choosing the gain matrix for our lorenz equations example required a knowledge of the linearization of the system at the bifurcation point .this method may also be applicable in systems for which the governing equations are not known , if it is possible to access perturbations to the equilibrium solution near the hopf bifurcation point , and hence extract the unstable eigenvectors numerically from experimental data .we have additionally shown that the lorenz equations example contains a codimension - two point which is also present in the normal form example of .this double - hopf point is not generic . in the normal form example of is an additional symmetry which is not present in the lorenz example .additional structures in the problem force a normally codimension - three phenomena to be codimension - two , since the frequencies of the bifurcating periodic orbits are forced to be in one : one resonance at the codimension - two point .it would be of interest to examine this degeneracy in more detail , by understanding the mathematics behind the structure of the hopf - hopf bifurcation in these examples .we intend to investigate further examples to see how robust this bifurcation structure is , for example , whether it appears in say , the hodgkin huxley or belousov zhabotinsky examples .the authors would like to thank luis mier - y - teran and david barton for assistance with dde - biftool .we are grateful to an anonymous referee for several helpful and detailed suggestions .this research was funded by nsf grant dms-0309667 .k. engelborghs , t. luzyanina and g. samaey , dde - biftool v. 2.00 user manual : a matlab package for bifurcation analysis of delay differential equations , technical report tw-330 , department of computer science , k.u.leuven , leuven , belgium , ( 2001 ) .
|
for many years it was believed that an unstable periodic orbit with an odd number of real floquet multipliers greater than unity can not be stabilized by the time - delayed feedback control mechanism of pyragus . a recent paper by fiedler _ et al . _ uses the normal form of a subcritical hopf bifurcation to give a counterexample to this theorem . using the lorenz equations as an example , we demonstrate that the stabilization mechanism identified by fiedler _ et al . _ for the hopf normal form can also apply to unstable periodic orbits created by subcritical hopf bifurcations in higher - dimensional dynamical systems . our analysis focuses on a particular codimension - two bifurcation that captures the stabilization mechanism in the hopf normal form example , and we show that the same codimension - two bifurcation is present in the lorenz equations with appropriately chosen pyragus - type time - delayed feedback . this example suggests a possible strategy for choosing the feedback gain matrix in pyragus control of unstable periodic orbits that arise from a subcritical hopf bifurcation of a stable equilibrium . in particular , our choice of feedback gain matrix is informed by the fiedler _ et al . _ example , and it works over a broad range of parameters , despite the fact that a center - manifold reduction of the higher - dimensional problem does not lead to their model problem .
|
being the fundamental resource in a wide range of situations in quantum information processing , entanglement is considered as a ` standard currency ' for quantum information tasks , and it is highly desirable to know which states of a given system exhibit a high or maximal amount of entanglement .when it comes to multipartite states this question becomes complicated .there are different _ types _ of entanglement , alongside which there are many different ways to quantify entanglement , each of which may capture a different desirable quality of a state as a resource . in this work , the geometric measure of entanglement , a distance - like entanglement measure , will be investigated to analyze maximally entangled multipartite states .there are several incentives to consider this particular measure .firstly , it has a broad range of operational interpretations : for example , in local state discrimination , additivity of channel capacities and recently for the classification of states as resources for measurement - based quantum computation ( mbqc) .another advantage of the geometric measure is that , while other known entanglement measures are notoriously difficult to compute from their variational definitions , the definition of the geometric measure allows for a comparatively easy calculation .furthermore , the geometric measure can be linked to other distance - like entanglement measures , such as the robustness of entanglement and the relative entropy of entanglement .the function also has applications in signal processing , particularly in the fields of multi - way data analysis , high order statistics and independent component analysis ( ica ) , where it is known under the name _rank one approximation to high order tensors _ .we focus our attention on permutation - symmetric states that is , states that are invariant when swapping any pair of particles .this class of states has been useful for different quantum information tasks ( for example , in leader election ) .it includes the greenberger - horne - zeilinger ( ghz ) states , w states and dicke states , and also occurs in a variety of situations in many - body physics .there has been lots of activity recently in implementing these states experimentally .furthermore , the symmetric properties make them amenable to the analysis of entanglement properties . an important tool in this work will be the majorana representation , a generalization of the bloch sphere representation of single qubits , where a permutation - symmetric state of qubits is unambiguously mapped to points on the surface of the unit sphere .recently , the majorana representation has proved very useful in analyzing entanglement properties of symmetric states .in particular , the geometric measure of entanglement has a natural interpretation , and the majorana representation facilitates exploitation of further symmetries to characterize entanglement . for example , the two - qubit symmetric bell state is represented by an antipodal pair of points : the north pole and the south pole . roughly speaking , symmetric states with a high degree of entanglementare represented by point distributions that are well spread out over the sphere .we will use this idea along with other symmetry arguments to look for the most entangled states . along the way we will compare this problem to other optimization problems of point distributions on the sphere . the paper is organized as follows . 
in section [ geometric_measure ] , the definition and properties of the geometric measure of entanglement are briefly recapitulated , which is followed by an introduction and discussion of symmetric states in section [ positive_and_symm ] . in section [ majorana_representation ] , the majorana representation of symmetric statesis introduced .the problem of finding the maximally entangled state is phrased in this manner , and is compared to two other point distribution problems on : tth s problem and thomson s problem . in section [ analytic ] ,some theoretical results for symmetric states are derived with the help of the intuitive idea of the majorana representation .the numerically determined maximally entangled symmetric states of up to 12 qubits are presented in section [ maximally_entangled_symmetric_states ] .our results are discussed in section [ discussion ] , and section [ conclusion ] contains the conclusion .the geometric measure of entanglement is a distance - like entanglement measure for pure multipartite states that assesses the entanglement of a state in terms of its remoteness from the set of separable states .it is defined as the maximal overlap of a given pure state with all pure product states and is also defined as the geodesic distance with respect to the fubini - study metric . herewe present it in the inverse logarithmic form of the maximal overlap , which is more convenient in relation to other entanglement measures : is non - negative and zero iff is a product state .we denote a product state closest to by , and it should be noted that a given can have more than one closest product state .indeed , we will usually deal with entangled states that have several closest product states . due to its compactness ,the normalized , pure hilbert space of a finite - dimensional system ( e.g. qudits ) always contains at least one state with maximal entanglement , and to each such state relates at least one closest product state .the task of determining maximal entanglement can be formulated as a max - min problem , with the two extrema not necessarily being unambiguous : it is often more convenient to define so that we obtain . 
because of the monotonicity of this relationship , the task of finding the maximally entangled state is equivalent to solving the min - max problem as mentioned in the introduction , there are several advantages of this measure of entanglement .first of all , it has several operational interpretations .it has implications for channel capacity and can also be used to give conditions as to when states are useful resources for mbqc .if the entanglement of a set of resource states scales anything below logarithmically with the number of parties , it can not be an efficient resource for deterministic universal mbqc .on the other hand , somewhat surprisingly , if the entanglement is too large , it is also not a good resource for mbqc .if the geometric measure of entanglement of an qubit system scales larger than ( where is some constant ) , then such a computation can be simulated efficiently computationally .of course , we should also note that there are many other quantum information tasks that are not restricted by such requirements .for example , the -qubit ghz state can be considered the most non - local with respect to all possible two - output , two - setting bell inequalities , whereas the geometric measure is only , independent of .the -qubit w state , on the other hand , is the optimal state for leader election with entanglement .indeed , for local state discrimination , the role of entanglement in blocking the ability to access information locally is strictly monotonic the higher the geometric measure of entanglement , the harder it is to access information locally .in addition , the geometric measure has close links to other distance - like entanglement measures , namely the ( global ) robustness of entanglement and the relative entropy of entanglement . between these measures the inequalities hold for all states , and they turn into equalities for stabilizer states ( e.g. ghz state ) , dicke states ( e.g. w state ) and permutation - antisymmetric basis states .an upper bound for the entanglement of pure qubit states is given in as we can see that this allows for states to be more entangled than is useful , e.g. for mbqc . indeed ,although no states of more than two qubits reach this bound , most states of qubits have entanglement . in the next section , we will see that symmetric states have generally lower entanglement .we can also make a general statement for positive states that will help us in calculating entanglement for this smaller class of states . 
for finite - dimensional systems, a general quantum state can be written in the form with an orthonormalized basis and complex coefficients .we will call a _ real state _if for a given basis all coefficients are real ( ) , and likewise call a _ positive state _ if the coefficients are all positive ( ) .a _ computational basis _ is one made up of tensors of local bases .[ lem_positive ] every state of a finite - dimensional system that is positive with respect to some computational basis has at least one positive closest product state .picking any computational basis in which the coefficients of are all positive , we denote the basis of subsystem with , and can write the state as , with and .a closest product state of can be written as , where ( with ) is the state of subsystem .now define a new product state with positive coefficients as , where .because of , the positive state is a closest product state of .this lemma , which was also shown in , asserts that positive states have at least one positive closest product state , but there can nevertheless exist other non - positive closest product states .a statement analogous to lemma [ lem_positive ] does not hold for real states , and it is easy to find examples of real states that have no real closest product state .from now on we will simply denote entanglement instead of referring to the geometric measure of entanglement .it must be kept in mind , however , that the maximally entangled state of a multipartite system subtly depends on the chosen entanglement measure .in general it is very difficult to find the closest product state of a given quantum state , due to the large amount of parameters in .the problem will be considerably simplified , however , when considering permutation - symmetric states . in experiments with many qubits , it is often not possible to access single qubits individually , necessitating a fully symmetrical treatment of the initial state and the system dynamics .the ground state of the lipkin - meshkov - glick model was found to be permutation - invariant , and its entanglement was quantified in term of the geometric measure and its distance - related cousins . for these reasonsit is worth analyzing various theoretical and experimental aspects of the entanglement of symmetric states , such as entanglement witnesses or experimental setups .the symmetric basis states of a system of qubits are given by the dicke states , the simultaneous eigenstates of the total angular momentum and its -component .they are mathematically expressed as the sum of all permutations of computational basis states with qubits being and being . 
with , and where we omitted the tensor symbols that mediate between the single qubit spaces .the dicke states constitute an orthonormalized set of basis vectors for the symmetric hilbert space .the notation will sometimes be abbreviated as when the number of qubits is clear .recently , there has been a very active investigation into the conjecture that the closest product state of a symmetric state is symmetric itself .a proof of this seemingly straightforward statement is far from trivial , and after some special cases were proved , hübener _ et al . _ were able to extend this result to the general case .they also showed that , for qudits ( general quantum -level systems ) the closest product state of a symmetric state is _ necessarily _ symmetric .this result greatly reduces the complexity of finding the closest product state and thus the entanglement of a symmetric state .a general pure symmetric state of qubits is a linear combination of the symmetric basis states , with the defining property that the state remains invariant under the permutation of any two of its subsystems .a closest product state of is i.e. a tensor product of identical single qubit states . from this , the amount of entanglement is found to be this formula straightforwardly gives the maximally entangled dicke state .for even it is and for odd the two equivalent states and . in general , however , the maximally entangled symmetric state of qubits is a superposition of dicke states .nevertheless , equation can be used as a lower bound to the maximal entanglement of symmetric states .this bound can be approximated by the stirling formula for large as .an upper bound to the geometric measure for symmetric qubit states can be easily found from the well - known decomposition of the identity on the symmetric subspace ( denoted , see e.g. ) , where denotes the uniform probability measure over the unit sphere on hilbert space .we can easily see that .hence , for any symmetric state of qubits , the geometric measure of entanglement is upper bounded by an alternative proof that has the benefit of being visually accessible is presented in appendix [ normalization_bloch ] .the maximal symmetric entanglement for qubits thus scales polylogarithmically between and . to compare this with the general non - symmetric case , consider the lower bound of the maximal qubit entanglement ( even ) of . a trivial example of an qubit state with are bipartite bell states , each of which contributes 1 ebit ; another example is the 2d cluster state of qubits , which has . thus the maximal entanglement of general states scales much faster , namely linearly rather than logarithmically .as mentioned , for most states the entanglement is even higher and thus too entangled to be useful for mbqc . while the bounds for symmetric states mean that permutation - symmetric states are never too entangled to be useful for mbqc , unfortunately their scaling is also too low to be good universal deterministic resources . they may nevertheless be candidates for approximate , stochastic mbqc .regardless of their use as resources for mbqc , the comparatively high entanglement of symmetric states still renders them formidable candidates for specific quantum computations or as resources for other tasks , such as the leader election problem and locc discrimination .we end this section by mentioning a simplification with respect to symmetric positive states .states that are symmetric as well as positive in some computational basis are labelled as _ positive symmetric _ .
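because the closest product state of a symmetric state can be taken symmetric , the overlap maximization for a dicke state reduces to a single - qubit problem with a standard closed - form solution . the short python sketch below is an added illustration ( base-2 logarithm assumed ; the closed form used is the standard expression consistent with the formula referred to above , not a quotation of it ) that evaluates it and shows the slow , logarithmic growth of the maximal dicke entanglement with the number of qubits .

import numpy as np
from math import comb, log2

def dicke_entanglement(n, k):
    # geometric measure of the dicke state with k excitations, using the standard
    # closed-form maximal squared overlap C(n,k) (k/n)^k ((n-k)/n)^(n-k)
    # (base-2 logarithm assumed here)
    if k in (0, n):          # these dicke states are product states
        return 0.0
    g2 = comb(n, k) * (k / n) ** k * ((n - k) / n) ** (n - k)
    return -log2(g2)

# the maximally entangled dicke state for each n is the balanced one, k ~ n/2
for n in range(2, 13):
    best = max(dicke_entanglement(n, k) for k in range(n + 1))
    print(n, round(best, 4))
# the printed values grow only logarithmically with n, illustrating the
# lower bound on the maximal symmetric entanglement discussed above

we now return to the positive symmetric states just defined .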
from the previous discussion it is clear that such states have a closest product state which is positive symmetric itself , a result first shown in .it should be noted that , while each closest product state of a positive symmetric state is _ necessarily _ symmetric for qudits , it _ need not _ be positive .we can formulate this as a statement akin to lemma [ lem_positive ] .[ lem_positive_symmetric ] every symmetric state of qudits , which is positive in some computational basis , has at least one positive symmetric closest product state .with the discussion of the geometric measure and symmetric states behind us , we have gathered the prerequisites to introduce a crucial tool , the majorana representation .it will help us to understand the amount of entanglement of symmetric states . in classical physics , the angular momentum of a system can be represented by a point on the surface of the 3d unit sphere , which corresponds to the direction of .no such simple representation is possible in quantum mechanics , but majorana pointed out that a pure state of spin- can be uniquely represented by not necessarily distinct points on .this is a generalization of the spin- ( qubit ) case , where the 2d hilbert space is isomorphic to the unit vectors on the bloch sphere .an equivalent representation also exists for permutation - symmetric states of spin- particles . by means of this ` majorana representation ' any symmetric state of qubits be uniquely composed from a sum over all permutations of undistinguishable single qubit states : the normalization factor is in general different for different . by means of equation, the multi - qubit state can be visualized by unordered points ( each of which has a bloch vector pointing in its direction ) on the surface of a sphere .we call these points the _ majorana points _ ( mp ) , and the sphere on which they lie the _ majorana sphere_. with equation , the form of a symmetric state can be explicitly determined if the mps are known .if the mps of a given state are unknown , they can be determined by solving a system of equations . the majorana representation has been rediscovered several times , and has been put to many different uses across physics . in relation to the foundations of quantum mechanics , it has been used to find efficient proofs of the kochen - specker theorem and to study the ` quantumness ' of pure quantum states in several respects , as well as the approach to classicality in terms of the discriminability of states .it has also been used to study berry phases in high spin and quantum chaos . within many - body physicsit has been used for finding solutions to the lipkin - meshkov - glick model , and for studying and identifying phases in spinor bec .it has also been used to look for optimal resources for reference frame alignment and for phase estimation .recently , the majorana representation has also become a useful tool in studying the entanglement of permutation - symmetric states .it has been used to search for and characterize different classes of entanglement , which have interesting mirrors in the classification of phases in spinor condensates .of particular interest , in this work , is that it gives a natural visual interpretation of the geometric measure of entanglement , and we will see how symmetries in the point distributions can be used to calculate the entanglement and assist in finding the most entangled states. 
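the majorana construction can be made concrete with a few lines of code . the sketch below is an added illustration ( not from the original text ; normalization and phase conventions may differ from those used in the paper ) that builds the symmetric qubit state from given majorana points by summing over all permutations , which is adequate for the small qubit numbers considered here .

import numpy as np
from itertools import permutations

def bloch_qubit(theta, phi):
    return np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

def majorana_state(points):
    # symmetric n-qubit state built from n majorana points (theta, phi) by an
    # explicit sum over all permutations; brute force with n! terms, so only
    # sensible for small n
    n = len(points)
    qubits = [bloch_qubit(t, p) for t, p in points]
    psi = np.zeros(2 ** n, dtype=complex)
    for perm in permutations(range(n)):
        term = np.array([1.0 + 0j])
        for i in perm:
            term = np.kron(term, qubits[i])
        psi += term
    return psi / np.linalg.norm(psi)

# example: two points at the north pole and one at the south pole give the
# three-qubit w state (up to the labelling convention of |0> and |1>)
pts = [(0.0, 0.0), (0.0, 0.0), (np.pi, 0.0)]
print(np.round(majorana_state(pts), 3))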
the connection to entanglement can first be noticed by the fact that the point distribution is invariant under local unitary maps . applying an arbitrary single - qubit unitary operation to each of the subsystems yields the lu map and from equationit follows that in other words , the symmetric state is mapped to another symmetric state , and the mp distribution of is obtained by a joint rotation of the mp distribution of on the majorana sphere along a common axis . therefore and have different mps , but the same _ relative _ distribution of the mps , and the entanglement remains unchanged .when it comes to the geometric measure of entanglement , we can be even more precise .for qubits , every closest product state of a symmetric state is symmetric itself , so that one can write with a single qubit state , and visualize by the bloch vector of .in analogy to the majorana points , we refer to as a _ closest product point _ ( cpp ) . for the calculation of the geometric measure of entanglement , the overlap with a symmetric product state is the task of determining the cpp of a given symmetric state is thus equivalent to maximizing the absolute value of a product of scalar products . from a geometrical point of view ,the are the angles between the two corresponding points on the majorana sphere , and thus the determination of the cpp can be viewed as an optimization problem for a product of geometrical angles .we will now demonstrate the majorana representation for two and three qubit symmetric states .the case of two qubits is very simple , because any distribution of two points can be rotated on the majorana sphere in a way that both mps are positive , with and for some ] .the form of the underlying quantum state follows from equation as this state is positive , so lemma [ lem_positive_symmetric ] asserts the existence of at least one positive cpp . with the ansatz for the cpp , the position of the cppis found by calculating the absolute maximum of . from thisit is found that the parameter of the cpp depends on the parameter of the mps as follows : the permitted values of the left - hand side are [ 0,1 ] , but the right - hand side lies outside this range for . for these values the cppis fixed at .figure [ 3_graph ] shows how the cpp parameter changes with .it is seen that from onwards , the cpp abruptly leaves the north pole and moves towards the south pole along the prime meridian . from equations andthe amount of entanglement is easily calculated and is displayed in figure [ 3_graph ] . is monotonously increasing and reaches a saddle point at the ghz state ( ) .the main point of interest in this paper is the study of maximally entangled symmetric states . for thisthe majorana representation is extremely helpful , because it allows the optimization problem of maximizing the entanglement to be written in a simple form . 
with the help of equation, the min - max problem for finding the maximally entangled state can be reformulated as this ` majorana problem ' bears all the properties of an optimization problem on the surface of a sphere in .these kinds of problems deal with arrangements of a finite number of points on a sphere so that an extremal property is fulfilled .two well - known members , tth s problem and thomson s problem , have been extensively studied in the past .* tth s problem , * also known as fejes problem and tammes problem , asks how points have to be distributed on the unit sphere so that the minimum distance of all pairs of points becomes maximal .this problem was first raised by the biologist tammes in 1930 when trying to explain the observed distribution of pores on pollen grains .recasting the points as unit vectors , the following cost function needs to be maximized : the point configuration that solves this problem is called a spherical code or sphere packing .the latter term refers to the equivalent problem of placing identical spheres of maximal possible radius around a central unit sphere , touching the unit sphere at the points that solve tth s problem . *thomson s problem , * also known as the coulomb problem , asks how point charges can be distributed on the surface of a sphere so that the potential energy is minimized .the charges interact with each other only through coulomb s inverse square law .devised by j. j. thomson in 1904 , this problem raises the question about the stable patterns of up to 100 electrons on a spherical surface .its cost function is given by the coulomb energy and needs to be minimized . the original motivation for thomson s problem was to determine the stable electron distribution of atoms in the plum pudding model .while this model has been superseded by modern quantum theory , there is a wide array of novel applications for thomson s problem or its generalization to other interaction potentials . among theseare multi - electron bubbles in liquid , surface ordering of liquid metal drops confined in paul traps , the shell structure of spherical viruses , ` colloidosomes ' for encapsulating biochemically active substances , fullerene patterns of carbon atoms and the abrikosov lattice of vortices in superconducting metal shells .exact solutions to tth s problem are only known for points , and in thomson s problem for points . despite the different definitions of the two problems, they share the same solutions for points .numerical solutions are , furthermore , known for a wide range of in both problems . the solutions to are trivial and given by the dipole and equilateral triangle , respectively .for the platonic solids are natural candidates , but they are the actual solutions only for . for solutions are not platonic solids and are different for the two problems .we will cover the solutions for in more detail alongside the majorana problem in section [ maximally_entangled_symmetric_states ] . on symmetry grounds, one could expect that the center of mass of the points always coincides with the sphere s middle point .this is , however , not the case , as the solution to tth s problem for or the solution to thomson s problem for shows .furthermore , the solutions need not be unique . 
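both classical cost functions can be explored numerically . the following rough python sketch is an added illustration ( the number of points , the optimizer and the number of restarts are arbitrary choices , and a local optimizer is not guaranteed to reach the global minimum ) ; it minimizes the coulomb energy of points on the unit sphere and also reports the smallest pair distance , the quantity that tóth s problem maximizes .

import numpy as np
from scipy.optimize import minimize

def to_xyz(angles):
    t, p = angles.reshape(-1, 2).T
    return np.stack([np.sin(t) * np.cos(p), np.sin(t) * np.sin(p), np.cos(t)], axis=1)

def coulomb_energy(angles):
    # thomson cost function: sum of inverse distances over all pairs of points
    xyz = to_xyz(angles)
    d = np.linalg.norm(xyz[:, None, :] - xyz[None, :, :], axis=-1)
    iu = np.triu_indices(len(xyz), k=1)
    return np.sum(1.0 / d[iu])

n = 6                     # number of point charges (illustrative choice)
rng = np.random.default_rng(0)
best = None
for _ in range(20):       # random restarts to dodge local minima
    x0 = np.column_stack([np.arccos(rng.uniform(-1, 1, n)),
                          rng.uniform(0, 2 * np.pi, n)]).ravel()
    res = minimize(coulomb_energy, x0, method="L-BFGS-B")
    if best is None or res.fun < best.fun:
        best = res

xyz = to_xyz(best.x)
d = np.linalg.norm(xyz[:, None, :] - xyz[None, :, :], axis=-1)
print("minimal coulomb energy ~", round(best.fun, 4))                        # ~ 9.985 (octahedron)
print("smallest pair distance ~", round(d[np.triu_indices(n, 1)].min(), 4))  # ~ sqrt(2)

as noted above , the solutions to these classical problems need not be unique .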
for tth s problem , the first incident of this is , and for thomson s problem at and .these aspects show that it is , in general , hard to make statements about the form of the ` most spread out ' point distributions on the sphere .the majorana problem is considered to be equally tricky , particularly with the normalization factor depending on the mps .furthermore , the mps of the solution need not all be spread out far from each other , as demonstrated by the three qubit state with its two coinciding mps .in this section , results for the interdependence between the form of qubit symmetric states and their majorana representation will be derived .more specifically , it will be examined what the distributions of mps and cpps look like for states whose coefficients are real , positive or vanishing . in some of these cases the mps or cpps have distinct patterns on the sphere , which can be described by symmetries . in this context, care has to be taken as to the meaning of the word ` symmetric ' .permutation-_symmetric _ states were introduced in section [ positive_and_symm ] , and only these states can be represented by point distributions on the majorana sphere . for some of these symmetric states , their mp distribution exhibits symmetry properties on the sphere .examples of this can be found in figure [ ghz_w_pic ] , where the ghz state and w state have _ rotational symmetries _ around the z - axis , as well as _ reflective symmetries _ along some planes .let be a general symmetric state of qubits . to understand the relationship between the state s coefficients and the majorana representation , consider the effect of symmetric lus .a symmetric lu acting on the hilbert space of an qubit system is defined as the -fold tensor product of a single - qubit unitary operation : .this is precisely the lu map that was shown in equation and to map every symmetric state to another symmetric state . considering the hilbert space of a single qubit ,the rotation operator for -axis rotations of the qubit is and the rotation operator for -axis rotations of the qubit is changes the relative phase , but not the absolute value of the qubit s coefficients .conversely , changes the absolute value , but not the relative phase of the coefficients . from equationit is easily seen that and pass this behavior on to the symmetric lus and .for example , the effect of on is from this it is easy to determine the conditions for the mps of having a rotational symmetry around the z - axis , i.e. ( up to a global phase ) for some . from equationit is clear that the possible rotational angles ( up to multiples ) are restricted to , with , .the necessary and sufficient conditions are : [ rot_symm ] the mp distribution of a symmetric qubit state is rotationally symmetric around the -axis with rotational angle ( ) iff equation is equivalent to : . from thisit follows that , with . in other words ,a sufficient number of coefficients need to vanish , and the spacings between the remaining coefficients must be multiples of .for example , a symmetric state of the form is rotationally symmetric with , because the spacings between non - vanishing coefficients are multiples of .[ maj_max ] every maximally entangled symmetric state of qubits has at least two different cpps .the cases are trivial , because their maximally entangled states ( bell states and w state , respectively ) have an infinite number of cpps . for , we consider a symmetric state with only one cpp and show that can not be maximally entangled . 
because of the lu invariance on the majorana sphere ( cf .equation ) , we can take the cpp to be the north pole , i.e. . denoting a single qubit with , the smooth and continuous overlap function then has its absolute maximum at .for any other local maximum , the value of is smaller than and therefore an infinitesimal change in the mps of can not lead to a cpp outside a small neighborhood of the north pole .we will now present an explicit variation of that increases the entanglement . has complex coefficients that fulfil , as well as and .can be set positive by means of the global phase , and for symmetric states with as a cpp it is easy to verify that is a necessary condition for the partial derivatives of being zero at . ] define the variation as , with .this state fulfils the requirement , and is normalized : .we now investigate the values of around the north pole . in this area , hence for small , but nonzero and . therefore the absolute maximum of is smaller than that of , and is more entangled than . for symmetric states with real coefficients , the following lemma asserts a reflection symmetry of the mps and cpps with respect to the --plane that cuts the majorana sphere in half .in mathematical terms , the mps and cpps exhibit a reflection symmetry with respect to the --plane iff for each mp is also a mp , and the same holds for cpps too .[ maj_real ] let be a symmetric state of qubits . is real iff all its mps are reflective symmetric with respect to the --plane of the majorana sphere .( ) let be a real state .then , and since majorana representations are unique , has the same mps as .therefore the complex conjugate of each non - real mp is also a mp .( ) let the mps of be symmetric with respect to the --plane .then for every nonreal mp its complex conjugate is also a mp .because is real , it becomes clear , from the permutation over all mps in equation , that the overall state is real , too .the reflective symmetry of the mps naturally leads to the same symmetry for the cpps .[ cpp_real ] let be a symmetric state of qubits .if is real , then all its cpps are reflective symmetric with respect to the --plane of the majorana sphere .lemma [ maj_real ] asserts that for every mp of , the complex conjugate is also a mp . by considering the complex conjugate of the optimization problem ,it becomes clear that for any cpp the complex conjugate is also a cpp . for symmetric states with positive coefficients, strong results can be obtained with regard to the number and locations of the cpps . in particular , for non - dicke states it is shown that there are at most cpps and that non - positive cpps can only exist if the mp distribution has a rotational symmetry around the z - axis .furthermore , the cpps can only lie at specified azimuthal angles on the sphere , namely those that are ` projected ' from the meridian of positive bloch vectors by means of the z - axis rotational symmetry ( see , e.g. , the positive seven qubit state shown in figure [ bloch_7 ] ) .dicke states constitute a special case due to their continuous azimuthal symmetry . the two dicke states and are product states , with all their mps and cpps lying on the north and the south pole , respectively . for any other dicke state the mps are shared between the two poles , and the cpps form a continuous horizontal ring with inclination . [ cpp_mer ]let be a positive symmetric state of qubits , excluding the dicke states . 
1 .if is rotationally symmetric around the z - axis with minimal rotational angle , then all its cpps are restricted to the azimuthal angles given by with .furthermore , if is a cpp for some , then it is also a cpp for all other values of .2 . if is not rotationally symmetric around the z - axis , then all its cpps are positive .the proof runs similar to the one of lemma [ lem_positive ] , where the existence of at least one positive cpp is established .we use the notations with , and .case ( a ) : consider a non - positive cpp with and , and define . then .if this inequality is strict , then is not a cpp .this would be a contradiction , so it must be an equality .thus , for any two indices and of non - vanishing coefficients and , the following must hold : .this can be reformulated as , or equivalently because is the minimal rotational angle , is the largest integer that satisfies equation , and thus there exist and with s.t . . from this and from equation, it follows that .therefore is a cpp if and only if is an integer .case ( b ) : considering a cpp with and , we need to show that . defining , and using the same line of argumentation as above , the equation must hold for any pair of non - vanishing and is equivalent to or , in particular .if there exist indices and of non - vanishing coefficients s.t . , then , as desired . otherwise consider with coprime , . because is not rotationally symmetric, the negation of lemma [ rot_symm ] yields that , for any two and ( ) with , there must exist a different pair and ( ) with s.t . is not a multiple of and vice versa . from and equation, it follows that as well as .this is a contradiction , so . with this result about the confinement of the cpps to certain azimuthal angles , it is possible to derive upper bounds on the number of cpps . [ maj_max_pos_zero ] the majorana representation of every positive symmetric state of qubits , excluding the dicke states , belongs to one of the following three mutually exclusive classes . 1 . is rotationally symmetric around the z - axis , with only the two poles as possible cpps . is rotationally symmetric around the z - axis , with at least one cpp being non - positive . is not rotationally symmetric around the z - axis , and all cpps are positive . regarding the cpps of states from class ( b ) and ( c ) ,the following assertions can be made for : 1 .if both poles are occupied by at least one mp each , then there are at most cpps , else there are at most cpps .there are at most cpps .is the smallest integer not less than . ]starting with the first part of the theorem , case ( c ) has already been shown in lemma [ cpp_mer ] , so consider states with a rotational symmetry around the z - axis .if all cpps are either or , then we have case ( a ) , otherwise there is at least one cpp which does not lie on a pole .if this is non - positive , then we have case ( b ) , and if is positive , then lemma [ cpp_mer ] states the existence of another , non - positive cpp , thus again resulting in case ( b ) .the proof of the second part of the theorem is a bit involved and can be found in appendix [ max_cpp_number ] .in this section , we present the numerically determined solutions of the majorana problem for up to 12 qubits . 
in order to find these ,the results from the previous sections were extremely helpful .this is because an exhaustive search over the set of all symmetric states quickly becomes unfeasible , even for low numbers of qubits and because the min - max problem is too complex to allow for straightforward analytic solutions . among the results particularly helpful for our search are lemma [ cpp_mer ] and theorem [ maj_max_pos_zero ] . for positive statesthey strongly restrict the possible locations of cpps , and thus greatly simplify the calculation of the entanglement .it then suffices to determine only the positive cpps because all other cpps automatically follow from the z - axis symmetry ( if any exists ) .we will see that this result is especially powerful for the platonic solids in the cases where the location of the cpps can be immediately determined from this argument alone , without the need to do any calculations . from the definition of , it is clear that the exact amount of entanglement of a given state ( or its corresponding mp distribution ) is automatically known once the location of at least one cpp is known .a numerical search over the set of positive symmetric states often detects the maximally entangled state to be of a particularly simple form , enabling us to express the exact positions of its mps and cpps analytically . in some cases ,however , no analytical expressions were found for the positions of the cpps and/or mps . in these casesthe exact amount of entanglement remains unknown , although it can be numerically approximated with high precision . in this waywe can be quite confident of finding the maximally entangled positive symmetric state . in the general symmetric casewe do not have as many tools , so the search is over a far bigger set of possible states , and we can be less confident in our results .we therefore focus our search to sets of states that promised high entanglement .such states include those with highly spread out mp distributions , particularly the solutions of the classical optimization problems . from these results we can propose candidates for maximal symmetric entanglement .for two and three qubits , the maximally entangled symmetric states are known and were discussed in section [ majorana_representation ] , so we start with four qubits .a table summarizing the amount of entanglement of the numerically determined maximally entangled positive symmetric as well as the entanglement of the candidates for the general symmetric case can be found in appendix [ entanglement_table ] . for four points , both tth s and thomson s problem are solved by the vertices of the regular tetrahedron . the numerical search for the maximally entangled symmetric state returns the platonic solid too .recast as mps , the vertices are the symmetric state constructed from these mps by means of equation shall be referred to as the ` tetrahedron state ' .its form is , and its mp distribution is shown in figure [ bloch_4 ] . because the state is positive and has a rotational symmetry around the z - axis , lemma [ cpp_mer ] restricts the possible cpp locations to the three half - circles with .figure4 ( 30,86) ( -11,21) ( 66,16) ( 68,48) from the symmetry of the platonic solid it is clear that the mp distribution of figure [ bloch_4 ] can be rotated s.t . 
, or is moved to the north pole , with the actual distribution ( and thus ) remaining unchanged .each of these rotations , however , gives rise to new restrictions on the location of the cpps mediated by lemma [ cpp_mer ] .the intersections of all these restrictions are the four points where the mps lie .this yields the result that has four cpps , with their bloch vectors being the same as those in equation . from this the amount of entanglement follows as . for five points , the solution to thomson s problem is given by three of the charges lying on the vertices of an equatorial triangle and the other two lying at the poles .this is also a solution to tóth s problem , but it is not unique .the corresponding quantum state , the ` trigonal bipyramid state ' , is shown in figure [ bloch_5](a ) .this state has the form , and a simple calculation yields that it has three cpps that coincide with the equatorial mps , giving an entanglement of . ( figure [ bloch_5 ] : ( a ) the trigonal bipyramid state ; ( b ) the square pyramid state . ) however , a numerical search among all positive symmetric states yields states with higher entanglement .our numerics indicate that the maximally entangled state is the ` square pyramid state ' shown in figure [ bloch_5](b ) .this state has five cpps , one on the north pole and the other four lying in a horizontal plane slightly below the plane with the mps .the form of this state is its mps are with , and the cpps are with .the exact values can be determined analytically by solving quartic equations .the of equation is given by the real root of , and this can be used to calculate . with the substitution the value of is given by the real root of .approximate values of these quantities are : the entanglement is , which is considerably higher than that of .we remark that the ` center of mass ' of the five mps of does not coincide with the sphere s origin , thus ruling out the corresponding spin- state as an anticoherent spin state , as defined in .the regular octahedron , a platonic solid , is the unique solution to tóth s and thomson s problem for six points .the corresponding ` octahedron state ' was numerically confirmed to solve the majorana problem for six qubits . ( figure [ bloch_6_1 ] : ( a ) the octahedron state in its straightforward orientation ; ( b ) the rotated , positive orientation . ) the straightforward orientation shown in figure [ bloch_6_1](a ) has the form , and its mps are with . can be turned into the positive state by means of an rotation .the cpps can be obtained from this state in the same way as for the tetrahedron state .being a platonic solid , the mp distribution of figure [ bloch_6_1](b ) is left invariant under a finite subgroup of rotation operations on the sphere . from lemma [ cpp_mer ] , the intersection of the permissible locations of the cpps is found to be the eight points lying at the center of each face of the octahedron , forming a cube inside the majorana sphere . with .in contrast to the tetrahedron state , where the mps and cpps overlap , the cpps of the octahedron state lie as far away from the mps as possible . this is plausible , because in the case of the octahedron equation is zero at the location of any mp , due to the mps forming diametrically opposite pairs .the amount of entanglement is .
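the geometry just described can be checked directly . the python sketch below is an added illustration ( the grid resolution and conventions are arbitrary , and the printed numbers are approximate ) ; it builds the octahedron state from its six majorana points and scans the overlap with symmetric product states , recovering a maximal squared overlap of about 2/9 and an entanglement of about 2.17 , attained along the cube - like directions mentioned above .

import numpy as np
from itertools import permutations

def qubit(theta, phi):
    return np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

def state_from_mps(points):
    # symmetric state from its majorana points, as a sum over permutations
    n = len(points)
    psi = np.zeros(2 ** n, dtype=complex)
    for perm in permutations(range(n)):
        term = np.array([1.0 + 0j])
        for i in perm:
            term = np.kron(term, qubit(*points[i]))
        psi += term
    return psi / np.linalg.norm(psi)

# the six vertices of the regular octahedron as majorana points
oct_pts = [(0.0, 0.0), (np.pi, 0.0)] + [(np.pi / 2, k * np.pi / 2) for k in range(4)]
psi = state_from_mps(oct_pts)

# scan the overlap with symmetric product states over a (theta, phi) grid;
# by the symmetric closest-product-state result this suffices for the entanglement
best = 0.0
for t in np.linspace(0.0, np.pi, 181):
    for p in np.linspace(0.0, 2 * np.pi, 361):
        sigma = qubit(t, p)
        prod = np.array([1.0 + 0j])
        for _ in range(6):
            prod = np.kron(prod, sigma)
        best = max(best, abs(np.vdot(prod, psi)))

print("max overlap^2 ~", round(best ** 2, 4))           # ~ 2/9
print("entanglement  ~", round(-2 * np.log2(best), 4))  # ~ 2.17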
for seven points , the solutions to the two classical problems become fundamentally different for the first time .tóth s problem is solved by two triangles asymmetrically positioned about the equator and the remaining point at the north pole , or ( 1 - 3 - 3 ) in the föppl notation .thomson s problem is solved by the vertices of a pentagonal dipyramid , where five points lie on an equatorial pentagon and the other two on the poles .the latter is also numerically found to be the solution to the majorana problem . ( figure [ bloch_7 ] : the pentagonal dipyramid state . ) the ` pentagonal dipyramid state ' , shown in figure [ bloch_7 ] , has the form , and its mps are with .the cpps of this positive state can be determined analytically by choosing a suitable parametrization . with position of and is determined by the real root of the cubic equation in the interval .the approximate amount of entanglement is .the solution to tóth s problem is an arrangement of the form ( 2 - 2 - 4 - 2 ) , while thomson s problem is solved by the gyroelongated square bipyramid , a deltahedron that arises from a cubic antiprism by placing square pyramids on each of the two square surfaces . the ten - qubit case is distinct in two respects .it is the first case where the numerically determined positive solution is not rotationally symmetric around any axis .furthermore , we found non - positive states with higher entanglement than the conjectured solution for positive states .a numerical search returns a state of the form as the positive state with the highest entanglement , namely . from lemma [ rot_symm ] it is clear that this state is not rotationally symmetric around the z - axis .the mp distribution is shown in figure [ bloch_10](a ) .the state has only three cpps , which are all positive ( cf .theorem [ maj_max_pos_zero ] ) , but there are six other local maxima of with values close to the cpps .their positions are shown by dashed crosses in figure [ bloch_10](a ) .while the total mp distribution is not rotationally symmetric around the z - axis , one would expect from the numerical results that the mps form two horizontal planes , one with five mps and another with four mps , with equidistantly spread out mps . however , this is not the case , as the locations of the mps deviate by small , but significant , amounts from this simple form . ( figure [ bloch_10 ] : ( a ) the maximally entangled positive ten - qubit state ; ( b ) a rotationally symmetric positive state with nearly the same entanglement ; ( c ) the candidate for the maximally entangled ten - qubit state . ) interestingly , there is a fully rotationally symmetric positive state that comes very close to in terms of entanglement .its straightforward form is , as displayed in figure [ bloch_10](b ) .the 12 cpps of this state are easily found as the solutions of a quadratic equation .the two positive cpps are and the entanglement is .this is less than difference from .the solution to thomson s problem , recast as a quantum state of the form , is not positive and has an entanglement of . from numerics one can see that the entanglement of this state can be further increased by slightly modifying the coefficients , arriving at a state with eight cpps and an entanglement of .the state is shown in figure [ bloch_10](c ) , and we propose it as a candidate for the maximally entangled symmetric state of ten qubits . the solution to tóth s problem is a pentagonal antiprism with a pentagonal pyramid on one of the two pentagonal surfaces , or ( 1 - 5 - 5 ) . the solution to thomson s problem is of the form ( 1 - 2 - 4 - 2 - 2 ) .
analogous to the ten - qubit case , the numerically found positive state of 11 qubits with maximal entanglement is not rotationally symmetric .the state , shown in figure [ bloch_11](a ) , is of the form , and its entanglement is .the state has only two cpps , but there exist seven more local maxima with values close to the cpps . ( figure [ bloch_11 ] : ( a ) the maximally entangled positive 11 - qubit state ; ( b ) the candidate for the maximally entangled 11 - qubit state . ) the solution to tóth s problem , which is of the form , yields very low entanglement , but by modifying the coefficients of this non - positive state one can find a state which is even more entangled than . as shown in figure [ bloch_11](b ) , the state is rotationally symmetric around the z - axis and has 11 cpps .the entanglement is , making the state the potentially maximally entangled state of 11 qubits . for 12 points , both of the classical problems are solved by the icosahedron , a platonic solid . because the icosahedron can not be cast as a positive state , the numerical search for positive states yields a different state of the form . from figure [ bloch_12](a ) it can be seen that this state can be thought of as an icosahedron with one circle of mps rotated by 36 degrees so that it is aligned with the mps of the other circle .there are three circles of cpps with five in each circle .one of these planes coincides with the equator , so that is a trivial cpp . nevertheless , the exact locations of some of the mps and cpps are unknown .the approximate entanglement is . ( figure [ bloch_12 ] : ( a ) the maximally entangled positive 12 - qubit state ; ( b ) the icosahedron state . ) due to the high symmetry present in platonic solids , the ` icosahedron state ' is a strong candidate for maximal symmetric entanglement of twelve qubit states .the state can be cast with real coefficients , and its mps can be easily derived from the known angles and distances in the icosahedron . with . from numerics and from the z - axis rotational symmetry , it is evident that there are 20 cpps , one at the center of each face of the icosahedron .as in the six - qubit case , the mps appear as diametrically opposite pairs , forcing the cpps to be as remote from the mps as possible .the cpps are with and with the knowledge of the exact positions of the mps and cpps , the entanglement of the icosahedron state can be calculated as .figure [ icosahedronpic ] shows a spherical plot of the overlap function from the same viewpoint as in figure [ bloch_12](b ) .
due to their diametrically opposite pairs , the mpscoincide with the zeros in this plot .the cpps can be identified as the maxima of .the mp distribution of highly entangled states can be explained with the overlap function seen in figure [ icosahedronpic ] .appendix [ normalization_bloch ] states that the integration volume of over the sphere is the same for all symmetric states .therefore a bunching of the mps in a small area of the sphere would lead to high values of in that area , thus lowering the entanglement .this explains the tendency of mps to spread out as far as possible , as it is seen for the classical problems .however , there also exist highly entangled states where two or more mps coincide ( as seen for ) .this is intriguing because such configurations are the least optimal ones for classical problems .again , this can be explained with the constant integration volume of .because of latexmath:[$g(\sigma)^2 \propto \prod_i are the diametrically opposite points ( antipodes ) of the mps and therefore a lower number of _ different _ mps leads to a lower number of zeros in .this can lead to the integration volume being more evenly distributed over the sphere , thus yielding a higher amount of entanglement . excluding the dicke states with their infinite amount of cpps, one observes that highly entangled states tend to have a large number of cpps .the prime example for this is the case of five qubits , where the classical solution with only three cpps is less entangled than the ` square pyramid ' state that has five cpps . in theorem [ maj_max_pos_zero ]it was shown that is an upper bound on the number of cpps of positive symmetric qubit states .for all of our numerically determined maximally entangled states including the non - positive ones this bound is obeyed , and for most states the number of cpps is close to the bound ( ) or coincides with it ( ) .this raises the question whether this bound also holds for general symmetric states . to date , neither proof nor counterexample is known . when viewing the mps of a symmetric state as the edges of a polyhedron , euler s formula for convex polyhedra yields the upper bound on the number of faces .this bound is strict if no pair of mps coincides and all polyhedral faces are triangles .intriguingly , this bound is the same as the one for cpps mentioned in the previous paragraph , and the polyhedral faces of our numerical solutions come close to the bound ( ) or coincide with it ( ) .the faces of the polyhedron associated with the majorana representation might therefore hold the key to a proof for being the upper bound on the number of cpps for all symmetric states ( with only the dicke states excluded ) .the case of ten qubits seems to be the first one where the maximally entangled symmetric state can not be cast as a positive state . for ,our candidates for maximal entanglement are real states , so the question remains whether the maximally entangled state can always be cast with real coefficients .we consider this unlikely , firstly because of the higher amount of mp freedom in the general case and secondly because many of the solutions to the classical problems can not be cast as real mp distributions for higher . 
for thomson s problem ,the first distribution without any symmetry ( and thus no representation as a real state ) arises at , and for tth s problem at .upper and lower bounds on the maximal entanglement of symmetric states have already been discussed in section [ positive_and_symm ] , with a new proof for the upper bound given in appendix [ normalization_bloch ] .stronger lower bounds can be computed from the known solutions to tth s and thomson s problems by translating their point distribution into the corresponding symmetric state and determining its entanglement .the diagram in figure [ e_graph ] displays the entanglement of our numerical solutions , together with all bounds .figure14 for qubits , the maximally entangled state can not be symmetric or turned into one by locc , because the lower bound on general states is higher than the upper bound of symmetric states .for , the maximally entangled state ( w state ) is demonstrably symmetric , but for the numerical solutions for symmetric states have less entanglement than the lower bound of general states .this would imply that qubit maximally entangled states can be symmetric if , and only if , .in this paper , we have investigated the maximally entangled state among symmetric quantum states of qubits . by visualizing symmetric states through the majorana representation and with the help of analytical and numerical methods , strong candidates for the maximally entangled symmetric states of up to 12 qubitswere found . a comparison with the extremal distributions of tth s and thomson s problems shows that , in some cases , the optimal solution to majorana s problem coincides with that of the two classical problems , but in other cases it significantly differs . lower and upper bounds show that the maximal entanglement of permutation - symmetric qubit states scales between and with the number of qubits . with respect to mbqc ,these results indicate that , although permutation - symmetric states may not be good resources for deterministic mbqc , they may be good for stochastic approximate mbqc .it also gives bounds on how much information can be discriminated locally , for which explicit protocols are known in some cases ( in particular for dicke states ) .we remark that , due to the close relationship of distance - like entanglement measures , the results for the geometric measure give bounds to the robustness of entanglement and the relative entropy of entanglement , which can be shown to be tight in certain cases of high symmetry .we finally note that a similar study has been carried out , although from a different perspective , in search of the ` least classical ' state of a single spin- system ( which they call the ` queens of quantumness ' ) . therethe majorana representation is used to display spin- states , through the well - known isomorphism between a single spin- system and the symmetric state of spin- systems . in this context, the most ` classical ' state is the spin coherent state , which corresponds exactly to a symmetric product state in our case ( i.e. coinciding mps ) .the problem of is similar to ours in that they look for the state ` furthest away ' from the set of spin coherent states .however , different distance functions are used , so the optimization problem is subtly different and again yields different solutions .it is in any case interesting to note that our results also have interpretations in this context and vice versa .the authors thank s. miyashita , s. virmani , a. 
soeda and k .- h . borgwardt for very helpful discussions .this work was supported by the national research foundation & ministry of education , singapore , and the project ` quantum computation : theory and feasibility ' in the framework of the cnrs - jst strategic french - japanese cooperative program on ict .mm thanks the ` special coordination funds for promoting science and technology ' for financial support . _ note added ._ during the completion of this manuscript , we became aware of very similar work that also looks at the maximum entanglement of permutation - symmetric states using very similar techniques .a symmetric qubit state can be written as with , and the normalization condition . writing the closest product state as with , we obtain using the set of qubit unit vectors and the uniform measure over the unit sphere , the squared norm of equation can be integrated over the majorana sphere : taking into account that for any integer , one obtains [ calc_mean ]
\begin{aligned}
\int \big| \langle \sigma^{\otimes n} | \psi_{\mathrm{s}} \rangle \big|^2 \, {\text{d}}\sigma & = \int_{0}^{2\pi} \!\! \int_{0}^{\pi} \Big[ \sum_{k = 0}^{n} a_k^2 { \binom{n}{k} } \, { \text{c}}_{\theta}^{2(n - k)} { \text{s}}_{\theta}^{2k} \Big] \sin \theta \, { \text{d}}\theta \, { \text{d}}\varphi \enspace , \label{calc_mean_1} \\
& = 2 \pi \sum_{k = 0}^{n} a_k^2 { \binom{n}{k} } \int\limits_{0}^{\pi} { \text{c}}_{\theta}^{2(n - k)} { \text{s}}_{\theta}^{2k} \sin \theta \, { \text{d}}\theta \enspace , \label{calc_mean_2} \\
& = 4 \pi \sum_{k = 0}^{n} a_k^2 { \binom{n}{k} } \frac{\Gamma(k+1) \, \Gamma(n - k+1)}{\Gamma(n+2)} \enspace , \label{calc_mean_4} \\
& = 4 \pi \sum_{k = 0}^{n} a_k^2 \frac{1}{n+1} = \frac{4 \pi}{n+1} \enspace . \label{calc_mean_5}
\end{aligned}
the equivalence of equations and follows from the different definitions of the beta function . since the mean value of over the majorana sphere is , it follows that , or , for any symmetric qubit state .this result was first shown by r. renner in his phd thesis , using a similar proof that employs an explicit separable decomposition of the identity over the symmetric subspace . the same proof as ours was independently found by j. martin _ et al ._ class ( b ) : we consider states that have a z - axis rotational symmetry with minimal rotational angle , and . figure [ symm_example ] shows an example for . due to the rotational z - axis symmetry and the reflective symmetry imposed by lemma [ maj_real ] , the mps are restricted to specific distribution patterns .an arbitrary number of mps can lie on each of the poles , with the remaining mps equidistantly aligned along horizontal circles .the figure shows the two principal types of horizontal circles that can exist .the upper one is the basic type for positive states of five qubits , and the lower one a special case where two basic circles are intertwined at azimuthal angle from the position of the single basic circle , respectively .all conceivable horizontal circles of mps can be decomposed into these two principal types . ( figure [ symm_example ] : an example mp distribution with z - axis rotational symmetry , showing the two principal types of horizontal circles . ) according to equation , any cpp maximizes the function , where the are the mps . from lemma [ lem_positive ] it follows that there must be at least one positive .
for a mp distribution with mps on the north pole , mps on the south pole and the remaining mps on horizontal circles ,the function to maximize is where represents the factors contributed by a single basic circle with mps at inclination , and represents the factors contributed by two basic circles with mps intertwined at azimuthal angles , and inclination .it is easy to verify that from this it is clear that can be written in the form where the are positive - valued coefficients , and is the number of basic circles ( ) .the number of zeros of in gives a bound on the number of positive cpps .the form of is qualitatively different for and . with the substitution equation for becomes this is a real polynomial in , with the first and last coefficient vanishing if no mps exist on the south pole ( ) and north pole ( ) , respectively .descartes rule of signs states that the number of positive roots of a real polynomial is at most the number of sign differences between consecutive nonzero coefficients , ordered by descending variable exponent . from this and the fact that the codomain of is , we obtain the result that for there are at most , or extrema of lying in , depending on whether and are zero or not . for , we obtain the analogous result from descartes rule of signs , we find that there exist , or extrema of in , depending on whether and are zero or not . with these resultsit is easy to determine the maximum number of global maxima of , which are the positive cpps .case differentiations have to be done with regard to or , whether and are zero or not and whether is even or odd . due to the rotational z - axis symmetry, the non - positive cpps can be immediately obtained . for any positivecpp not lying on a pole , there are other , non - positive cpps lying at the same inclination ( cf. lemma [ cpp_mer ] ) . for ,the maximum possible number of cpps is ( even ) or ( odd ) .this is significantly less than that in the general case , where the maximum number of cpps is .interestingly , this bound decreases to if at least one of the two poles is free of mps. class ( c ) : all mps of a positive state must either lie on the positive meridian or form complex conjugate pairs ( cf .lemma [ maj_real ] ) . from thisthe optimization function is easily derived as with real .calculating yields the condition for the extrema : from this , the maximum number of cpps can be derived with descartes rule .all cpps are now restricted to the positive meridian and the poles , yielding at most cpps for odd and for even ..[enttable ] the table lists the known ( ) and numerically determined ( ) values of the maximal entanglement of symmetric qubit states .the left column lists the extremal entanglement among positive symmetric states , and , where more entangled non - positive symmetric states are known , they are displayed in the right column . [cols="^,^,^",options="header " , ] a trivial example of an qubit state with are bipartite bell states , each of which contributes 1 ebit .another example is the 2d cluster state of qubits , which has . can be set positive by means of the global phase , and for symmetric states with as a cpp it is easy to verify that is a necessary condition for the partial derivatives of being zero at .
|
the geometric measure of entanglement is investigated for permutation symmetric pure states of multipartite qubit systems , in particular the question of maximum entanglement . this is done with the help of the majorana representation , which maps an qubit symmetric state to points on the unit sphere . it is shown how symmetries of the point distribution can be exploited to simplify the calculation of entanglement and also help find the maximally entangled symmetric state . using a combination of analytical and numerical results , the most entangled symmetric states for up to 12 qubits are explored and discussed . the optimization problem on the sphere presented here is then compared with two classical optimization problems on the sphere , namely tth s problem and thomson s problem , and it is observed that , in general , they are different problems . = 1
|
triz theory of inventive problem solving was developed by genrikh altshuller in the soviet union in the mid-20th century .the ideas which led to development of triz emerged when the author was working in 1946 in patent office . in the next several decades after that altshuller and his team analyzed many thousands of patents , trying to discover patterns to identify what makes a patent successful .following his work in the patent office , between 1956 and 1985 he formulated triz and , together with his team , developed it further .since then , triz has gradually become one of the most powerful tools in the industrial world .for example , in his 7 march 2013 contribution to the business magazine forbes , `` what makes samsung such an innovative company ? '' , haydn shaughnessy wrote that triz `` became the bedrock of innovation at samsung '' , and that `` triz is now an obligatory skill set if you want to advance within samsung '' .the authors of triz devised the following four cornerstones for the method : the same problems and solutions appear again and again but in different industries ; there is a recognizable technological evolution path for all industries ; innovative patents ( which are about a quarter of the total ) use science and engineering theories outside of their own area or industry ; and an innovative patent uncovers and solves contradictions .in addition , the team created a detailed methodology , which employs tables of typical contradicting parameters and a wonderfully universal table of 40 inventive principles .the triz method consists in finding a pair of contradicting parameters in a problem , which , using the triz inventive tables , immediately leads to the selection of only a few suitable inventive principles that narrow down the choice and result in a faster solution to a problem .triz method of inventiveness , although created originally for engineering , is universal and can also be applied to science .indeed , triz methodology provides another way to look at the world combined with science it creates a powerful and eye - opening amalgam of science and inventiveness .triz is particularly helpful for building bridges of understanding between completely different scientific disciplines , and therefore its should be in particular useful to educational and research organizations that endeavor to break barriers between disciplines .it is this ability to create bridges between different disciplines that inspired the authors to develop novel textbooks that connect physics of accelerators , lasers and plasma via the art and methodology of inventiveness .we will refer to the methods developed in these books further below .still , experience shows that knowledge of triz is nearly non - existent in the scientific departments of western universities .moreover , it is not unusual to hear about unsuccessful attempts to introduce triz into the graduate courses of universities science departments .indeed , in many or most of these cases , the apparent reason for the failure is that the canonical version of triz was introduced to science phd students in the same way that triz is taught to engineers in industrial companies .this may be a mistake , because science students are rightfully more critically minded and justifiably skeptical about overly prescriptive step - by - step methods .indeed , a critically thinking scientist would immediately question the canonical number of 40 inventive principles , and would also note that identifying just a pair of contradicting parameters is a 
first - order approximation , and so on. a more suitable approach to introduce triz to graduate students , which takes into account the lessons learnt by our predecessors , could be different . instead of teaching graduate students the ready - to - use methodology , it might be better to take them through the process of recreating parts of triz by analyzing various inventions and discoveries from scientific disciplines , showing that the triz inventive principles can be efficiently applied to science . in the process ,additional inventive principles that are more suitable for scientific disciplines could be found and added to standard triz , or the standard triz principles would be adjusted . in our recent textbooks , we call this extension `` accelerating science ( as ) triz '' , where `` accelerating '' refers not to charged particle accelerators , but instead highlights that triz can help to boost various areas of science . the textbooks and the methodology itself is now starting to been used in science courses in oxford , and intensively used in uspas courses - us particle accelerator school , and demonstrated a boost in creativity of students .the illustrations of inventive principles presented in the following pages were in fact created for the recent uspas-2016 course `` unifying physics of accelerators , lasers and plasma '' . in the following pageswe will present illustrations of inventive principles with brief commentaries .we have decided for this note to keep the canonical number of 40 inventive principles , however we in some cases slightly renamed some of them or adjusted their description .the continuous process of adjustments the definitions and creating new examples should continue , and we encourage the readers to proactively participate in this process , as this would be the most efficient way to learn and master the triz methodology .the inventive principle _ segmentation _ may involve dividing an object into independent parts , making an object easy to disassemble , or increasing the degree of fragmentation or segmentation .an example we selected to illustrate this principle is a multi - leaf steel collimator used in a beamline for particle therapy .this collimator is needed to shape the proton beam in such a way that it will correspond to the shape of the target ( cancer site ) .sometime , a solid personalized collimator used for these purposes , however it needs to be machined each time for specific case .an adjustable collimator , made from segmented steel leafs , make the treatment planning and delivery much more efficient .the inventive principle _ taking out _ may involve separating an interfering part or property from an object or singling out the only necessary part ( or ) of an object .an example we selected to illustrate this principle is a collimator of the beam halo intended to localize beam losses ( which represent an interfering property in this case ) in accelerators .the top picture above shows a beamline of an accelerator without a collimator . in this case particles from beam halo can be lost everywhere , creating problems associated with , for example , increased radiation in every location .inserting a collimator in one location ( bottom picture ) would eliminate losses everywhere except the collimator itself , which itself can be treated specially , e.g. 
additional radiation shielding can be installed around the collimator .therefore , by inserting the collimator we have separated the interfering property ( losses ) from the system .the inventive principle _ local quality _ may involve changing an object s structure from uniform to non - uniform , changing an external environment ( or external influence ) from uniform to non - uniform , making each part of an object function in conditions most suitable for its operation , or making each part of an object fulfill a different and useful function .an example we selected to illustrate this principle is a superconducting resonator cavity , where the bulk of the cavity is made from copper , however the inner surface is covered by niobium .while typically the superconducting cavities are made entirely from niobium , which is an expensive material , an arrangement as shown in the illustration would allow to considerably save on material cost of such cavities , provided , of course , that results of the ongoing studies will be successful .the inventive principle _ asymmetry _ may involve changing the shape of an object from symmetrical to asymmetrical , or , if an object is asymmetrical , increasing the degree of its asymmetry .we illustrate this principle via asymmetrical design of the dual - axis coupled cavities in the compact energy - recovery based linac shown above . in this linac an accelerated electron beam , after radiation generation , comes back to the decelerating part of the cavity , where the beam returns its energy to the system .in order to avoid instabilities of the beam which can be created in this system , all high order modes of the cavities need to be decoupled .this is achieved by introducing carefully designed asymmetry between every cell of the two cavities .the inventive principle _ merging _ may involve bringing closer together ( or merging ) identical or similar objects , assembling identical or similar parts to perform parallel operations , or making operations contiguous or parallel ; bringing them together in time .an example we selected to illustrate this principle is multi - channel pipettes and modular dispensers that are now indispensable for biological studies where many samples , many genes , or many variations of drugs need to be studied and analyzed in parallel .the inventive principle _ universality _ may involve making a part or object perform multiple functions or eliminating the need for other parts .an example we selected to illustrate this principle is the following peculiar design proposal for the beam dump of a linear collider .this beam dump needs to take and absorb , typically , 10 mw of cw power in the form of 250 - 500 gev electron or positron beam .this energy is mostly waisted and goes to heat .a suggestion was made that this beam could in fact be used to either feed a sub - critical reactor to generate electric power or perhaps to make a neutrino factory out of it .therefore , the beam dump of this design of a linear collider performs multiple functions and becomes universal .the inventive principle _ nested doll _ may involve placing one object inside another , placing each object , in turn , inside the other , making one part pass through a cavity in the other .an example we selected to illustrate this principle ( which is also called inventive principle of _ russian dolls _ ) is the construction of a high - energy physics detector , where many different sub - detectors are inserted into one another , to enhance the accuracy of detecting 
elusive particles.

In standard TRIZ this principle is called ``anti-weight''; for science applications, however, it needs to be redefined as ``anti-force'', since gravity often plays a negligible role for the particles and nano- and micro-objects that science deals with, while electromagnetic forces can be much more important. The inventive principle _anti-force_ may involve compensating for the force acting on an object, merging it with other objects that provide a compensating force, etc., as explained in the figure below. We illustrate this principle with the beam heating system for plasma in a tokamak, where an accelerated beam heats the plasma. To prevent the beam from being affected by the field of the solenoid or by the plasma, the beam is made of neutral atoms, obtained by stripping electrons from an initial beam of negative hydrogen ions.

The inventive principle _preliminary anti-action_ may involve replacing an action that is known to produce both harmful and useful effects with an anti-action that controls the harmful effects. An example we selected to illustrate this principle is a final focus with local chromatic correction. Any strong focusing optics suffers from chromatic aberrations, as shown in the left part of the picture. Local chromatic correction involves dispersing the beam in energy _before_ it arrives at the final lenses, and also inserting a nonlinear sextupole magnet next to the final lens, which cancels the chromatic aberration by acting against it. As a result of this preliminary anti-action the beam is focused into a tight spot, as shown in the right picture.

The inventive principle _preliminary action_ may involve performing the required change of an object (either fully or partially) before it is needed, or pre-arranging objects such that they can come into action from the most convenient place and without losing time for their delivery. An example we selected to illustrate this principle is the crabbed collision. In a linear collider the electron and positron beams need to collide with a small crossing angle, as shown on the left side of the picture. However, their overlap during the collision would then be incomplete, and the luminosity would thus decrease.
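The size of this geometric luminosity loss can be estimated with a standard expression; the symbols below are ours, since none are given in the text, and the numerical value depends on the particular collider design:

S \simeq \frac{1}{\sqrt{1+\Phi^{2}}}\,, \qquad \Phi = \frac{\sigma_z}{\sigma_x}\tan\frac{\theta_c}{2} \approx \frac{\sigma_z\,\theta_c}{2\,\sigma_x}\,,

where \sigma_z and \sigma_x are the rms bunch length and horizontal beam size at the interaction point, \theta_c is the full crossing angle, and \Phi is often called the Piwinski angle. Because \sigma_z is typically several orders of magnitude larger than \sigma_x in a linear collider, even a crossing angle of a few milliradians can make \Phi of order unity; the crab-crossing scheme described next recovers S close to unity.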
in order to preventthis loss , the beams , before collisions , can pass through a radio - frequency cavity , which would give to the beams a useful kick , in such a way that the head and tails of the beam receive kicks in different directions .the beams will therefore start to rotate and come to the collision point with a proper orientation , ensuring full overlap .the inventive principle _ beforehand cushioning _ may involve preparing emergency means beforehand to compensate for the relatively low reliability of an object .an example we selected to illustrate this principle is a bolus ( compensator ) for the proton therapy beamline .the thickness of the bolus is varied depending on location and therefore it will modify the energy of different parts of the proton beam , and correspondingly modify the penetration depth of the protons , matching the shape of the cancer site .the reader may argue that this example suit better the previous principle of preliminary action .if so , we would encourage the reader to suggest other examples , such as for example emergency kicker that would dump the beam in an accelerator in case of an accident , to prevent losses of the beam into precious superconducting magnets .the inventive principle _ equipotentiality _ may involve limiting position changes of an object in gravity or other potential field ( e.g. changing operating conditions to eliminate the need to raise or lower the objects in a gravity field ) .an example we selected to illustrate this principle is a laser cooling , where interaction of laser with atom ( excitation of the atom ) occurs only when they are `` at the same potential '' , i.e. when the velocity of the atom is such that due to doppler shift the laser frequency corresponds to the energy of the atom excitation .the inventive principle _ the other way round _ ( which can be also called principle of system and anti - system ) may involve inverting the action(s ) used to solve the problem or turning the object ( or process ) `` upside down '' .we illustrate this principle via consideration of the cloud and bubble chambers .triz textbooks often cite charles wilson s cloud chamber ( invented in 1911 ) and donald glaser s bubble chamber ( invented in 1952 ) as examples of this principle `` system and anti - system '' .indeed , the cloud chamber works on the principle of bubbles of liquid created in gas , whereas the bubble chamber uses bubbles of gas created in liquid .if the triz inventive principle of system / anti - system were applied , the invention of the bubble chamber would follow immediately and not almost half a century after the invention of the cloud chamber .the inventive principle _ spheroidality curvature _ may involve using , instead of rectilinear parts , surfaces , or forms , curvilinear ones , moving from flat surfaces to spherical ones , etc . an example we selected to illustrate this principle is cavity resonator pill - box style as shown on the left side of the pictures , and elliptical cavity ( where shapes are rounded and consist of various connected ellipses ) .a particular example shown on the right corresponds to the cavity which can produce crabbing kick mentioned in the principle 10 , but it can also be any other similar cavity . 
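For orientation, the resonant frequency of the fundamental accelerating (TM010) mode of the ideal pill-box cavity is a textbook result; the notation here is ours:

f_{010} = \frac{j_{01}\, c}{2\pi R} \approx \frac{2.405\, c}{2\pi R}\,,

where R is the cavity radius and j_{01} is the first zero of the Bessel function J_0, so the frequency is set by the radius alone and is independent of the gap length. The elliptical geometry discussed next keeps this basic mode while eliminating the sharp corners of the pill-box.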
rounding the shapes of the resonator in such a wayallows to achieve better and smooth distribution of fields and currents along the walls of the cavity and correspondingly higher fields generated by the cavity on beam axis .the inventive principle _ dynamics _ may involve allowing ( or designing ) the characteristics of an object , external environment , or process to change to be optimal .an example we selected to illustrate this principle is a travelling focus idea intended to increase luminosity of linear colliders .the fields of the opposite beam during collision of e+ and e- beams can create an additional focusing which can help to squeeze the beams even tighter .however , for this additional focusing to work most optimally , the focal point for each beam needs to move during collision in such a way that it would coincide with the location of the head of the opposite beam .the location of the focal point is shown by arrows of corresponding color .such dynamic modification of the colliding beams would then give some increase of the luminosity .the inventive principle _ partial or excessive actions _ may involve , in case if 100% of the effect is hard to achieve with a given solution or method , using `` slightly less '' or `` slightly more '' of the same method , to make the problem considerably easier to solve . an example we selected to illustrate this principle is the design concept of a weak antisolenoid intended for compensating the beam x - y coupling effects in the interaction region of a linear collider .the anomalously large coupling effects arise due to overlap of the field of the main solenoid with the final focusing lenses .properly adjusted weak anti - solenoid can compensate a large fraction of these detrimental effects , making the problem much easier to solve with upstream coupling correctors .the inventive principle _ another dimension _ may involve moving into an additional dimension or using a multi - story arrangement of objects instead of a single - story arrangement , or tilting or re - orienting the object , laying it on its side , etc .an example we selected to illustrate this principle is something nature invented the dna packaging mechanism .the dna molecules , if strengthened out , are quite long , around 2 mm. however , they are packaged in a cell in a compact way . in general , there are five main different packaging levels .first , by going into another , transverse , direction , the dna molecules are packaged around histones . next multiples of these assemblies are packaged in a solenoid - like way .this then further packaged with multiple loops , and so on .in standard triz this method is called `` mechanical vibration '' , however for science applications this principle should better be re - defined as `` oscillations and resonances '' .the inventive principle _ oscillations and resonances _ may involve causing an object to oscillate or vibrate , increasing its frequency ( e.g. 
from microwave to optical ) , using an object s resonant frequency , etc .we illustrate this principle with the design concept of optical stochastic cooling which represents further evolution of stochastic cooling .both these methods are designed to decrease phase space volume of a beam of charged particle in accelerators .stochastic cooling relies on microwave range of frequencies , for detection of particles and acting on them , while optical stochastic cooling relies on , correspondingly , optical frequencies .the inventive principle _ periodic action _ may involve , instead of continuous action , using periodic or pulsating actions .we illustrate this principle via consideration of devices for generation of synchrotron radiation .this radiation is generated when relativistic charged particles move on a curved trajectory , loosing parts of its electromagnetic field .the simplest way to generate such radiation is to pass particles via a bending magnet , as shown on the left side of the picture .however , much better characteristic of radiation ( brightness , etc ) can be obtained if this process is repeated i.e. the particles are passed through a sequence of bends of different polarity .such arrangements of bends are called wigglers and undulators and are now widely used in synchrotron radiation light sources .the inventive principle _ continuity of useful action _ may involve carrying on work continuously , making all parts of an object work at full load , all the time , eliminating all idle or intermittent actions or work .we illustrate this principle via a concept of top - off injection for synchrotron light sources . in these sources radiationis generated by circulating electron beam which can decay due to losses , and thus the circulating current as well as the intensity of generated radiation decrease with time , as shown on the left side of the picture .the circulating beam is renewed seldom , when the new beam is injected .an alternative way to operate the light source is to arrange almost continuous injection so that the fresh portion of the beam would be injected very often , considerably increasing the efficiency of the light source and also eliminating any thermal effects associated with variations of circulating current or intensity of the emitted radiation .the inventive principle _ skipping _ may involve conducting a process , or certain stages of it ( e.g. destructible , harmful or hazardous operations ) at high speed .an example we selected to illustrate this principle is a gamma - jump technique used in accelerators . in proton circular accelerators in particular there is a notion of critical energy at this energy ( the relativistic factor corresponding to this energy is called ) the stable phase ( of the accelerating resonator ) flips to the other side of the sine wave .passing the critical energy during acceleration therefore requires jumping the phase of the resonator , to avoid beam getting lost. 
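The flip of the stable phase can be made explicit through the phase-slip factor; the symbols below are ours, since they are elided in the text:

\eta = \alpha_p - \frac{1}{\gamma^{2}} = \frac{1}{\gamma_t^{2}} - \frac{1}{\gamma^{2}}\,,

where \alpha_p is the momentum compaction factor of the ring and \gamma_t = 1/\sqrt{\alpha_p} is the transition (critical) value of the relativistic factor. Below transition \eta < 0 and the synchronous phase sits on the rising side of the rf wave; above transition \eta > 0 and it must sit on the falling side, which is why the phase of the resonator has to be jumped.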
still , some disturbance of the beam unavoidably happens , due to such transition .the critical energy value depends on the properties of the focusing optics of the accelerator .it is possible , therefore , to program the optics change in such a way , that transition through the critical energy will happen much faster , significantly reducing the detrimental effects on the accelerated beam .the inventive principle _ blessing in disguise _ ( which is also called _ turning lemons into lemonade _ ) may involve using harmful factors ( particularly , harmful effects of the environment or surroundings ) to achieve a positive effect , eliminating the primary harmful action by adding it to another harmful action to resolve the problem , or amplifying a harmful factor to such a degree that it is no longer harmful .we illustrate this principle via consideration of wake - fields in linacs , which are normally harmful , as they can deteriorate the quality of accelerated beam , however in the case when this linac feeds a free electron laser , such wakes can turn into a useful factor , as they can produce an energy chirp along the beam which can serve to further compress the beam longitudinally , enhancing the generated radiation .the inventive principle _ feedback _ may involve introducing feedback of feed - forward links into the process ( referring back , cross - checking ) to improve the process or an action .an example we selected to illustrate this principle is a concept of stochastic cooling intended for reduction of phase space volume of the circulating in an accelerator beam .this is done via first detecting oscillations of particles in one location , sending the amplified signal which carries information about these oscillation along a shorter path than the particle takes to travel , and acting on the same particle with a kick that would decrease its oscillation .the inventive principle _ intermediary _ may involve using an intermediary carrier object or intermediary process , or merging one object temporarily with another ( which can be easily removed ) .an example we selected to illustrate this principle is a principle of operation of three - level laser .in such laser the atoms of the active media at first are located in ground state , and then , due to excitation by pump , get to the level l3 , which has quite short life - time , so the atoms quickly revert , via non - radiative decay , to level l2 , which has , in contrast , long life time .any stimulating emission then would result in an avalanche of stimulated coherent radiation .the level l3 , as we see in this case , plays the role of an intermediary .the inventive principle _ self service _ may involve making an object serve itself by performing auxiliary helpful functions , or using waste resources , energy , or substances .an example we selected to illustrate this principle is a design of an energy recovery based asymmetric dual - axis x - ray source , where the cavities are arranged in such a way as to perform an auxiliary helpful function of decelerating the beam which has already produced useful x - ray radiation , in order to recuperate its energy and thus allow more efficient operation of the system .the inventive principle _ copying _ may involve using , instead of an unavailable , expensive or fragile object , its simpler and inexpensive copies , or replacing an object , or process with optical copies .an example we selected to illustrate this principle is a concept of synchrotron light beam size monitor .the beam of charged 
particles , circulating in an accelerator , emits synchrotron radiation , when it is passing bending magnets .beam size monitors such as wires crossing the beams are invasive , and should be avoided .synchrotron radiation however allows to create non - destructive monitors , when this radiation is directed with mirrors to detector and its profile analyzed .therefore , instead of fragile beam which could be analyzed with a crossing wire , its optical copy is analyzed in this case to determine its beam size .the inventive principle _ cheap short - living objects _ may involve replacing an expensive object with a multiple of inexpensive objects , comprising certain qualities ( such as service life , for instance ) .an example we selected to illustrate this principle is a concept of laser plasma acceleration . in standard acceleration the accelerating wave is excited inside of metal resonators .the accelerating gradient is therefore limited by the properties of materials and typically can not exceed 100 mv / m . however, a short and powerful laser pulse can excite a wave in plasma , where accelerating gradient can be thousand times higher .this wave in plasma can be used to accelerate particles .the plasma and wave in it therefore serve in this case as a cheap and short - lived object .the inventive principle _ mechanics substitution _ may involve replacing mechanical means with electromagnetic or sensory ( acoustic , taste or smell ) means .we illustrate this principle via considerations of electrostatic generators of two kinds . in the first type shown on the left part of the picturethe electric charges are delivered to the metal sphere by the moving rubber belt .the charged are deposited on the belt by a system of sharp needles .this is van der graaf generator . in the second case a sequence of diode - based rectifiersis used to amplify the voltage to a high level , sufficient for accelerating of charged particles .this is called cockcroft - walton generator .the inventive principle _ pneumatics and hydraulics _ may involve using gas and liquid parts of an object instead of solid parts ( e.g. inflatable , filled with liquids , air cushion , hydrostatic , hydro - reactive ) .an example we selected to illustrate this principle is a design concept of a liquid beam target .beam targets are often needed , for example , for production of positrons or neutrons from accelerated electron or proton beams .considerable amount of heat is deposited in the targets during its interaction with the beam .preventing destruction of the target can be done by making it rotating , or , ultimately , already `` destroyed '' , i.e. , in this case , liquid .the inventive principle _ flexible shells and thin films _ may involve using flexible shells and thin films instead of three dimensional structures or isolating the object from the external environment using flexible shells and thin films .an example we selected to illustrate this principle is a concept of light sail laser - plasma ion acceleration .laser plasma acceleration of ions is usually achieved via interaction of powerful short laser pulse with solid target .this method , however , does not produce a nice mono - energetic accelerated beam . 
from the other side ,shining the laser onto a very thin film creates radiation pressure driven acceleration of the entire film , and correspondingly the ion beam accelerated in such a case will have much better properties .the inventive principle _ porous materials _ may involve making an object porous or adding porous elements ( inserts , coatings , etc . ) , and if an object is already porous , using the pores to introduce a useful substance or function .an example we selected to illustrate this principle is a concept of creating porous membranes using accelerated ion beams . in this case ,ion beams accelerated typically in cyclotrons are directed into thin films , leaving ionized traces inside .further chemical processing creates pores in such membranes , which can be then used for various filters or other applications .the inventive principle _ color changes _ may involve changing the color of an object or its external environment , changing the transparency of an object or its external environment , changing the emissivity properties of an object subject to radiant heating , etc .an example we selected to illustrate this principle is a principle of opcpa optical parametric chirped pulse amplification . in this methoda nonlinear crystal is used , which emits two different wavelengths when pumped with a single wavelength .this crystal can be used in an optical amplifier , when a pump laser beam and a signal laser beam are sent onto the crystal , and out from the crystal an amplified signal and depleted pump beams emerge . if the signal beam is chirped , the output amplified signal is also chirped .change of the color by the nonlinear crystal via optical parametric process is an illustration of the color changes inventive principle .the inventive principle _ homogeneity _ or expressing it in latin _ similia similibus curantur _ may involve making objects interacting with a given object of the same material ( or material with identical properties ) .an example we selected to illustrate this principle is a concept of electron cooling .this cooling method is aimed at decrease of the phase space volume of charged particle beam , for example the beam of anti - protons .the cooling is done by overlapping the anti - proton beam with the beam of electrons going in the same direction and with the same velocity .hot anti - protons , colliding with colder electrons , will transfer their energy to electrons , and after many passages through the electron beam will cool down .so , here we cure similar with similar cool charged particles with other type of charged particles .the inventive principle _ discarding and recovering _ may involve making portions of an object that have fulfilled their functions go away ( discard by dissolving , evaporating , etc . ) or modifying these directly during operation , or , conversely , restoring consumable parts of an object directly in operation .an example we selected to illustrate this principle is a semiconductor saturable absorber mirror so called sesam . such device is used in the system of short laser pulse generator and plays the role of a mirror , which , when the stored in the laser cavity light reaches certain intensity , `` disappears '' due to saturation effects , thus releasing all stored laser light out from the cavity in the form of a short intense laser pulse .the sesam is thus a mirror that is discarded at a proper moment .the inventive principle _ parameter changes _ may involve changing an object s physical state ( e.g. 
to a gas , liquid , or solid ) , changing the concentration or consistency , changing the degree of flexibility , changing the temperature , etc .we illustrate this principle via consideration of variation of the ratio of the surface area of an object to the volume of the object .maxwell or thermodynamic equations indicate that changing the volume to surface ratio can change characteristics of the object such as cooling rate or its electromagnetic field .we illustrate this via an example of a cat , who can change its surface area depending on the environmental temperature , to control its cooling rate , or via an example of fiber lasers , which , in comparison with lasers with standard geometry of active media , have much higher surface area , therefore better cooling , and correspondingly can have higher repetition rate and higher efficiency .the inventive principle _ phase transitions _ may involve using phenomena occurring during phase transitions ( e.g. volume changes , loss or absorption of heat , etc . ) .an example we selected to illustrate this principle is a phenomenon of superconductivity when the electrical resistance of certain materials can drop to zero when temperature decreases below the critical one .the inventive principle _ thermal or electrical expansion or property change _ may involve using thermal or electrical expansion ( or contraction ) or other property change of materials , or using multiple materials with different coefficients of thermal expansion ( property change ) .an example we selected to illustrate this principle is an electro - optic effect dependence of optical properties of objects such as absorption or refraction ( pockels effect ) on the applied electric field . in the example shown on this picture this effectis used to create ultra - short laser pulses .the inventive principle _ strong oxidants _ may involve replacing common air with oxygen - enriched air , replacing enriched air with pure oxygen , exposing air or oxygen to ionizing radiation , using ionized oxygen or replacing ozonized ( or ionized ) oxygen with ozone .an example we selected to illustrate this principle is a method for sterilization of food with low energy electron beam .packaged food is deposited on a conveyor in a factory and passes through rastered electron beam .the picture above shows how this is done at the sterilization factory in sioux city , iowa .the inventive principle _ inert atmosphere _ may involve replacing a normal environment with an inert one , or adding neutral parts , or inert additives to an object . an example we selected to illustrate this principle is a method of using sulfur hexafluoride ( sf6 or elegas ) , which is a colorless non - flammable gas with excellent electric insulating and arc - quenching capacity .this gas can be used , in particular , to fill the interior of electrostatic generators ( van der graaf or cockcroft - walton types ) in order to reach higher voltage and therefore higher energy of accelerated particles .the curves on the left side of the picture show a comparison of electrical discharge voltage versus the value of pressure multiplied by the gap between electrodes for sf6 in comparison with an air .the inventive principle _ composite materials _ may involve changing from uniform to composite ( multiple ) materials .an example we selected to illustrate this principle is a method of hardening of an artificial knee joint using ion implantation . 
In this case, ion beam surface treatment creates a strong film on the surface of the artificial joint, which is equivalent to creating a composite material.

We hope that the suggested illustrations of the inventive principles will stimulate readers to think about inventive ways to solve the scientific problems they encounter in their research. The examples we have suggested are often not ideal and could certainly be improved, and we welcome our readers to take part in creating better ones. Most importantly, we encourage readers to use the methodology of inventive problem solving in their research, and beyond.
|
The theory of inventive problem solving (TRIZ) is a powerful tool widely used in the engineering community. It is based on identifying a physical contradiction in a problem and, from the corresponding pair of contradicting parameters, selecting a few suitable inventive principles, which narrows down the choice and leads to a much faster solution of the problem. It is remarkable that the TRIZ methodology can also be applied to scientific disciplines: many TRIZ inventive principles can be identified _post factum_ in various scientific inventions and discoveries. However, additional inventive principles, more suitable for scientific disciplines, should be introduced and added to standard TRIZ, and some of the standard inventive principles need to be reformulated to be better applicable to science; we call this extension Accelerating Science TRIZ (AS-TRIZ). In this short note we describe and illustrate the AS-TRIZ inventive principles via scientific examples, identifying them in discoveries and inventions originating from physics, biology, and other areas. This brief note, we believe, is one more step towards bringing the TRIZ methodology closer to the scientific community.
|
we are surrounded by waves , and they effect our daily life in a way that many of us are not aware .sound and light are our main tools for observing our immediate surroundings .light is for example responsible for you being able to read these lines , and more important , to get access to the vast majority of the knowledge accumulated in writings by man throughout centuries of intellectual activities .x - ray and ultra sound techniques have given tremendous contribution to the success of modern medicine .radio- and micro - waves are invaluable in modern communication technology including cellular phones and radio and tv broadcastings .understanding of quantum waves , and their behavior , constitutes the foundation of electronics and semiconductor technologies an essential ingredient in the past and future progress of computer hardware .the above list is not at all , or intended to be , complete .it could in fact easily been made much longer .however , the bottom line that we want to make here is that with the ubiquitous presence of wave phenomena in various applications , it is not surprising to find that wave phenomena have had , and still have , a prominent position in our studies of the physical world , and even today such phenomena are of out - most importance in science , medicine and technology .if you take an average introductionary text on wave phenomena , you will find discussions of how plane waves of constant frequency propagates in a homogeneous , isotropic medium .thereafter , the authors typically discuss the scattering and transmission of such waves at a _planar _ interface separating two semi - infinite media of different dielectric properties the fresnel formulae .these formulae serve to accurately describe the scattering of light from for example a mirror .however , from our everyday experience , we know that most surfaces are not mirror like , and naturally occurring objects are more complicated then two semi - infinite media .most naturally occurring surfaces are actually not smooth at all .they are , however , rough in some sense .in fact , all objects , man - made or not , _ have _ to be rough at atomic scales , but such small length scales are normally not resolved by our probes . it should be kept in mind that the characterization of a surface as rough or smooth is noticeably not unique , and it is not a intrinsic property of the surface . instead , however , it depends on the wavelength used to `` observe '' the surface .if the typical roughness is on a scale much smaller then the wavelength of the probe , this surface is considered as smooth . however , by reducing the wavelength of the light , the same surface might also be characterized as being rough .it is , among other factors , the surface topography and the wavelength of the probe , as we will see below , that together go into the characterization of a surface as being rough .let us from now on assume an electromagnetic probe , _i.e. _ light .if the surface can be considered as smooth , light is scattered ( coherently ) into the specular direction . as the roughness of the surfaceis increased so that the surface becomes weakly rough , a small fraction of the incident light will be scattered into other directions than the specular one .this non - specular scattering is called _ diffuse scattering _ or by some authors _ incoherent scattering_. 
as the roughness is increased even further , the diffuse ( incoherent ) component of the scattered light is increased on the expense of the specular component .when the surface roughness is so that the specular component can be more - or - less neglected as compared to the diffuse component , the surface is said to be strongly rough .this transmission from as smooth to a strongly rough surface is depict in figs .[ fig : planar_to_rough ] . due to the practical applications of waves , and the number of naturally occurring surfaces being rough , it is rather remarkable that it took several hundreds years from the birth of optics as a scientific discipline to someone started to considered wave scattering from rough surfaces .as far as we know today , the first such theoretical study was made at the end of the 19th century ( probably in the year of 1877 ) by one of the greatest scientists of its time , the british physicist lord rayleigh .he considered the scattering of light incident normally onto a sinusoidal surface . in 1913mandelshtam studied how light was scattered from liquid surfaces . by doing so, he became the first to consider scattering from _ randomly _ rough surfaces .this , as it turned out , should define the beginning of an active research area _ wave scattering from randomly rough surfaces _ which still today is an active field .however , it was first after the last world war that the research effort put into the field stated to accelerate .since that time , a massive body of research literature has been generated in the field .up to the mid 1980 s most of the theories used in this field were single scattering theories .however , from then on the main focus of the research has been on multiple scattering theories .in addition , advances in experimental techniques has lately enabled experimentalists to fabricate surfaces under well controlled conditions by using a holographic grating technique .this has opened up a unique possibility for direct comparison of theory and experiments in a way not possible a few decades ago . inspired by the works of lord rayleigh researchers developed a criterion the rayleigh criterion that could be used to determine when a given surface was to be considered as rough . hereboth the wavelength of the incident light as well as its angle of incidence are incorporated . to illustrate how this comes about ,let us consider a rough surface defined by . on this surfacewe pick two arbitrary points and .it could now be asked : what is the phase difference between two waves being scattered from these two points ?for simplicity we will here only consider the specular direction . under this assumptionit is straight forward to show that the phase difference is given by the following expression where is the modulus of the wave vector of the incident light of wavelength , and is the angle of incidence of the light as measured from the normal to the mean surface . from eq . we immediately observe that if the surface is planar , so that , the phase difference ( in the specular direction ) is always zero independent of the angle of incidence .however , if the surface is rough , in general .if , the two waves will be in , or almost in , phase and they will thus interfere constructively . on the other hand ,if , they will be ( more - or - less ) completely out off phase and as a result interfere destructively , and no , or almost no , energy will be scattered into the specular direction . 
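The phase-difference expression referred to above is elided in the text; from the standard path-length argument it presumably takes the form (our notation)

\Delta\phi = 2 k \left| \zeta(x_1) - \zeta(x_2) \right| \cos\theta_0\,, \qquad k = \frac{2\pi}{\lambda}\,,

where \zeta(x) is the surface profile, k is the modulus of the incident wave vector, and \theta_0 is the angle of incidence measured from the normal to the mean surface. It vanishes for a planar surface, \zeta(x_1) = \zeta(x_2), and the smooth- and rough-surface limits discussed in the surrounding text correspond to small and large values of \Delta\phi, with the borderline conventionally taken near \Delta\phi = \pi/2.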
in terms of the phase, a smooth surface would correspond to , and a rough one to .thus , might be considered as the borderline between a smooth and a rough surface ; if the surface is smooth , and otherwise ( ) it is rough .the criterion is the famous rayleigh criterion for a smooth surface .if the surface is randomly rough , it is practical to replace the height difference by a typical height fluctuation as provided , for example , by the rms - height , , of the surface .hence , the rayleigh criterion can be expressed as where is the so - called rayleigh parameter . from the rayleigh criterion , , it should be observed that in addition to the surface topography itself and the wavelength of the light , also its angle of incidence goes into determining if a surface is rough or not .this is probably the most important lesson to be learned today from the rayleigh criterion . the present review consists of basically two main parts one focus theoretical methods whilst the other one is devoted to rough surface scattering phenomenology . in the first part we try to present an overview of some of the main theories and methods used in the study of wave scattering from randomly rough surfaces .we start in sect .[ chapter : elmag ] by recapitulating the basic results of electromagnetic theory including maxwell s equations .this section serves among other things to define our notation .then we continue by describing how to characterize randomly rough surfaces ( sect .[ chap : characterization ] ) . sect .[ chap : theory ] is devoted to the quantities and main techniques used in the field of electromagnetic wave scattering from randomly rough surfaces .we here review classical theories like small amplitude perturbation theory , many - body perturbation theory as well as numerical simulation approaches . finally in sect .[ chapt : phenomena ] we discuss some of the phenomena that may occur when light is scattered from rough surfaces .such effects include the backscattering and satellite peaks phenomena ( weak localization ) , anderson localization , angular intensity correlation effects and nonlinear effects ( second harmonic generation ) .the present review mainly concern itself with rough surfaces and the scattering of electromagnetic wave from such . in this sectionwe therefor review some of the basic results of electromagnetic theory , including surface polaritons .the present section also serves to define our notation that we will use extensively in the following sections .the style of this review is kept quite brief , since all the material should be well known .a more thorough treatments can be found for example in the classical text on electrodynamics by j. d. jackson ..summary of the quantities contained in maxwell s equations , as well as their si - units . [ cols="^,<,<",options="header " , ] it should also be noticed , as was realized recently , that a measurement of the angular intensity correlations can provide valuable information regarding the statistical properties of the amplitude of the scattered field .in particular , it was shown that the short - range correlation function is in a sense a measure of the non - circularity of the complex gaussian statistics of the scattering matrix . if the random surface is such that only the and correlation functions are observed , then obeys complex gaussian statistics .if the random surface is such that only is observed , then obeys circular complex gaussian statistics and are said to be _ circular _ complex gaussian if and . 
] .this can indeed be seen from fig .[ phen : corr : comp - sim]b , which shows that as the surface is made rougher , and therefore approaches a circular complex gaussian process , the -correlation vanishes as compared to . finally , if the random surface is such that , and are observed in addition to both and , then is not a gaussian random process at all .however , which kind of statistics satisfies in this case is not clear for the moment .these results fits the findings from standard speckle theory which assumes that the disorder is strong and that constitutes a circular complex gaussian process .so far in this section , we have discussed exclusively rough surface scattering phenomena that find their explanation within linear electromagnetic theory .there are still many exciting nonlinear surface scattering effects that have to be addressed in the future .such nonlinear studies are still at their early beginning .the studies that have been conducted so far on nonlinear surface scattering effects have mainly been related to the angular distribution of the scattered _ second harmonic _ generated light .in particular what have been studied are some new features in the backscattering directions of the second harmonic light . in this sectionwe will discuss some of these results .the presentation given below follows closely the one given in ref . .it is well - known from solid state physics that an ( infinite ) homogeneous and isotropic metal has inversion symmetry .a consequence of this is that there is no nonlinear polarization in the bulk . if , however , the metal is semi - infinite with an interface to vacuum , say , the inversion symmetry is broken .thus , a nonlinear polarization , different from zero , will exist close to the surface .as we move into the bulk of the metal , this effect will become smaller and smaller and finally vanish .therefore , one might talk about a nonlinear surface layer which through nonlinear interactions will give rise to light that is scattered away from the rough surface at the second harmonic frequency . the scattering system that we will be considering is the by now standard one depicted in fig . [ fig : theory : geometry ] .this geometry is illuminated from the vacuum side , , by a -polarized planar wave of ( fundamental ) frequency .only the -polarized component of the scattered second harmonic generated light will be considered here , even though there also will exit a weak -polarized component due to the nonlinear interaction at the surface .however , the -polarized component represents the main contribution to the scattered light at the second harmonic frequency , and will therefore be our main concern here. moreover it will be assumed that the generation of the second harmonic light does not influence the field at the fundamental frequency in any significant way . to motivate the study , we in figs .[ fig : phen : sh - experimental ] show some experimental results ( open circles ) due to k. a. odonnell and r. torre for the so - called normalized intensity of the second harmonic light scattered incoherently from a strongly rough silver surface .the surface was characterized by gaussian height statistics of rms - height and a gaussian correlation function .the transverse correlation length was .the wavelength of the incident light was , while the angles of incidence considered were , , and as indicated in fig .[ fig : phen : sh - experimental ] . 
for the scattering of -polarized light from a randomly rough silver surface .the surface was characterized by a gaussian height distribution of rms - height , as well as a gaussian correlation function of correlation length .the dielectric constants were at the fundamental and second harmonic frequency and respectively .the thick lines represent the results of numerical simulations and the open circles represent the experimental results of odonnell and torre .the incident plane wave had a wavelength .in the numerical simulations the surface had length and it was sampled with an interval .the numerical results were averaging over realizations of the surface , and the angles of incidence were ( a ) , ( b ) , and ( c ) .( after ref ..).,width=529,height=453 ] the most noticeable feature of the experimental results ( open circles ) shown in figs .[ fig : phen : sh - experimental ] are , without question , the dips seen in the backscattering direction .it should be recalled that for the linear problem one gets at this scattering angle an enhanced backscattering _( result not shown ) similar to the one shown _e.g _ in fig .[ fig : phen : bc - num - calc ] .so why do we have a dip for the second harmonic light at the backscattering direction , and not a peak ?below we will with the help of numerical simulations try to get a deeper understanding of what causes these dips .the nonlinear layer existing along the surface is of microscopic dimensions .since we are working with the macroscopic maxwell s equations it is natural to assume that this layer is infinitely thin . under this assumption ,the effect of the nonlinear boundary layer is accounted for in the boundary conditions to be satisfied by the field , and its normal derivative , at the second harmonic frequency .these boundary conditions have jumps at the nonlinear interface , and their degree of discontinuity depends on the nonlinear polarization , or equivalently , on the parameters that describes this polarization .the form of the ( nonlinear ) boundary conditions at the second harmonic frequency can be shown to be [ eq : phen : nonolinear - bc ] where the sources and have been defined in eqs . .as before , the superscripts denote the sources evaluated just above ( ) and below ( ) the rough surface defined by .the functions and are related to the nonlinear polarization through the integral of this quantity over the nonlinear boundary layer . to fully specify the nonlinearity of the problem, the polarization has to be specified .for instance for a free electron model , that we will consider here for simplicity , it takes on the form [ eq : phen : nonlin - pol ] here the constants and are defined as where is the electron number density , is a vector normal to the local surface at point , and and are the charge and mass of the electron respectively .the explicit expressions , in this model , for and can be found in ref . . 
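The numerical results discussed below are ensemble averages over many realizations of such Gaussian random surfaces. A minimal sketch of the standard spectral (Fourier-filtering) way of drawing one realization is given here; the function name, parameters, and numerical values are ours and purely illustrative, and the normalization is chosen so that the surface has the Gaussian correlation function <zeta(x) zeta(x')> = delta^2 exp(-(x-x')^2/a^2) with rms height delta and correlation length a:

import numpy as np

def gaussian_surface(n_points, dx, rms_height, corr_length, rng):
    # One realization of a 1D Gaussian random surface zeta(x) sampled on
    # n_points points with spacing dx, with the prescribed rms height and
    # Gaussian correlation length.
    k = 2.0 * np.pi * np.fft.fftfreq(n_points, d=dx)
    # power spectrum corresponding to the Gaussian correlation function
    spectrum = (rms_height**2 * corr_length * np.sqrt(np.pi)
                * np.exp(-(k * corr_length)**2 / 4.0))
    # filter white Gaussian noise in Fourier space and transform back
    noise = rng.standard_normal(n_points)
    zeta = np.fft.ifft(np.fft.fft(noise) * np.sqrt(spectrum / dx)).real
    return zeta

rng = np.random.default_rng(0)
zeta = gaussian_surface(n_points=4096, dx=0.1, rms_height=1.0, corr_length=3.0, rng=rng)
print(zeta.std())  # close to the requested rms height for a long surface

Each such realization is then fed through the linear and nonlinear stages of the calculation, and the resulting intensities are averaged over the ensemble.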
since the surfaces used in the experiments leading to the results shown in figs .[ fig : phen : sh - experimental ] are strongly rough , perturbation theory does not apply , and one has in theoretical studies to resort to rigorous numerical calculations of the second harmonic scattered light .such kind of simulations are conducted on the basis of the rigorous simulation approach presented in sect .[ sect : theory : sect : numsim ] .the calculations are now , however , made out of two main steps : first , one calculates the ( linear ) source functions and ; the field and it s normal derivative evaluated on the surface at the fundamental frequency .this is done exactly as described in sect .[ sect : theory : sect : numsim ] . from the knowledge of the linear sources functions at the fundamental frequency ,the right - hand - side of the boundary conditions can be calculated since they depend directly on these source functions as well as on the form of the nonlinear polarization . in all numerical results to be presented later in this section the form for the nonlinear polarization given by eq . will be used . with the functions and available , the nonlinear sources , and , are readily calculated from an approach similar to the one described in detail in sect .[ sect : theory : sect : numsim ] .the only main difference is that now the boundary conditions to be used when coupling the two integral equations are the nonlinear boundary conditions given in eqs . . with the source functions both for the fundamental and second harmonic frequency available , all interesting quantities about the scattering process, both linear and nonlinear , are easily obtained .the full details of this approach can be found in ref . . based on this numerical approach ,we compare in figs .[ fig : phen : sh - experimental ] the numerical simulation results ( solid lines ) obtained by leyva - lucero _ et al . _ to the experimental results obtained by odonnell and torre ( open circles ) .the dielectric constants used in the simulations were at the fundamental frequency and at the second harmonic frequency .indeed by comparing the experimental and theoretical results shown in figs .[ fig : phen : sh - experimental ] , a nice correspondence is observed both qualitatively and quantitatively .particular in light of the oversimplified model used in the simulations for the nonlinear interaction , the agreement is no less then remarkable .from the experimental and theoretical results shown in figs .[ fig : phen : sh - experimental]a , a clear dip is seen in the incoherent component of the mean normalized second harmonic intensity for the backscattering direction . for the linear scattering problem, however , there is an enhancement at the same scattering angle .so what is the reason for the dip in the second harmonic light ?odonnell and torre , who conducted the experiments leading to the experimental results shown in figs .[ fig : phen : sh - experimental ] , suggested that these dips were due to coherent effects .in particular they suggested that the dips originated from destructive interference between waves scattered multiple times in the valleys of the strongly rough surface . since the numerical simulation approach seems to catch the main physics of the second harmonic generated light , it might therefore serve as a useful tool for testing the correctness of the suggestion made by of odonnell and torre .this can be done by applying a single scattering approximation to the generation of the second harmonic light . 
as described above , the numerical approach leading to the theoretical results shown as solid lines in figs .[ fig : phen : sh - experimental ] , consists mainly of a linear and nonlinear stage where each stage is basically solved by some variant of the approach given in sect .[ sect : theory : sect : numsim ] . by using a single scattering approach , like the kirchhoff approximation , to both stages of the calculation ,a single scattering approximation for the full problem is obtained .the single scattering processes included in such a calculation is illustrated in figs .[ fig : phen : sh - single - scattering - proc ] . notice that also unphysical scattering processes like the one shown in fig .[ fig : phen : sh - single - scattering - proc]b , are included in this approximation . as a function of the scattering angle calculated in a single scattering approximation .the remaining parameters of the simulation were the same as those used in obtaining the results shown in figs .[ fig : phen : sh - experimental ] .the angle of incidence was .( after ref ..),width=377,height=226 ] in fig .[ fig : phen : sh - single - scattering ] we present the consequence for the angular dependence of the normalized intensity of only including single scattering processes in the second harmonic generation . from this figureit is easily seen that the intensity of the second harmonic generated light calculated in a single scattering approximation does _ not _ give rise to a dip ( or peak ) for the backscattering direction .in fact the overall angular dependence of in the single scattering approximation is quite different from the one obtained by the rigorous approach described above .similar result holds for the other two angles incidence considered in figs .[ fig : phen : sh - experimental ] .hence , one may conclude that the dips present in the backscattering direction of the incoherent component for the mean second harmonic generated light is not due to single scattering .it therefore has to be a multiple scattering phenomenon . to look more closely into this , the authors of ref . used an iterative approach for the linear part of the scattering problem which enabled them to calculate the scattered fields according to the order of the scattering process .such a ( neumann - liouville ) iterative approach has been developed and used earlier in the literature .for the nonlinear part of the calculation the rigorous simulation approach was used and thus all higher order scattering processes were here taken into account .some of the processes accounted for by this procedure and which give rise to the second harmonic light is depicted in figs .[ fig : phen : sh - multiple - scattering - proc ] . , while the thick gray arrows represent light of frequency .( after ref ..).,width=529,height=415 ] we notice that the processes depicted in figs . [fig : phen : sh - multiple - scattering - proc]a and b represent single scattering in the linear part and are thus taken properly into account by using the standard kirchhoff approximation ( for the linear part ) . however , for the paths shown in figs .[ fig : phen : sh - multiple - scattering - proc]c and d , one needs to consider a pure double scattering approximation in order to include these processes properly . 
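Schematically, the order-by-order separation of the linear part amounts to truncating the Neumann-Liouville series of the discretized surface integral equation. The sketch below is generic rather than a reproduction of the actual kernels: K stands for the discretized kernel matrix and psi0 for the Kirchhoff (single-scattering) source term, both placeholders, and the series is only guaranteed to converge when the kernel norm is sufficiently small:

import numpy as np

def scattering_orders(K, psi0, max_order):
    # Partial sums of the Neumann-Liouville series
    #   psi = psi0 + K psi0 + K^2 psi0 + ...
    # Keeping only the first term retains single scattering; adding the
    # next term includes the pure double-scattering contribution, etc.
    term = psi0.copy()
    partial_sums = [term.copy()]
    for _ in range(max_order):
        term = K @ term
        partial_sums.append(partial_sums[-1] + term)
    return partial_sums

In this language, the single-scattering curve corresponds to truncation at order zero, while the pure double-scattering contribution is the difference between the first- and zeroth-order partial sums.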
in figs .[ fig : phen : sh - num ] the simulation results for the second harmonic light are shown for the case where a single scattering ( fig .[ fig : phen : sh - num]a ) and a pure double scattering ( fig .[ fig : phen : sh - num]b ) approximation is used for the linear part of the scattering process . in both casesdips in the backscattering direction are observed . in order to obtain the solid curve of fig .[ fig : phen : sh - num]c _ both _ single and double scattering processes were taken into account for the linear part of the calculation .this result would therefore include any interference effect between paths like those show in figs .[ fig : phen : sh - multiple - scattering - proc]a e . the dashed line in fig . [fig : phen : sh - num]c is just the sum of the curves shown in figs .[ fig : phen : sh - num]a and b. it does therefore not contain any interference effects between type i paths ( figs .[ fig : phen : sh - multiple - scattering - proc]a b ) and type ii paths ( figs .[ fig : phen : sh - multiple - scattering - proc]c d ) . that the two curves shown in fig . [ fig : phen : sh - num]c are so close to each other tells us that the interference between type i and type ii paths are rather small ( if any ) .furthermore , paths of the type illustrated in fig .[ fig : phen : sh - multiple - scattering - proc]e do not seem to be important , and they do not have coherent partners . for the scattering of -polarized light from a random silver surface wherethe linear part of the problem was solved by iteration .the incident angle of the light was and the other parameters of the simulation were as in fig .[ fig : phen : sh - experimental ] .the curves have , ( a ) the single scattering contributions in the linear scattering and all contributions at the harmonic frequency , ( b ) pure double scattering contributions in the linear scattering and all contributions at the harmonic frequency , and ( c ) the single and double scattering contributions in the linear scattering and all contributions at the harmonic frequency . in ( c ) , the curve shown with the dashed line represents the sum of the curves shown in ( a ) and ( b ) .( after ref ..),width=529,height=453 ] the numerical results presented so far seem to indicate that the behavior in the backscattering direction is affected by interference between the paths of either type i or type ii . in the backscattering direction, there is no phase difference due to optical path difference between the two type i paths say .similar argument hold for the type ii paths .hence any phase difference between the two paths has to come from phase shifts during the reflection . in the linear multiple - scattering processes giving rise to enhanced backscattering the phase shifts dueto reflection will be the same for the two processes because the local fresnel coefficients are _ even _ functions of the angle of incidence .hence , the two paths in the backscattering direction will for the full linear problem both have the same phase and hence interferer constructively giving rise to the celebrated enhanced backscattering pack .however , for multiple scattering processes involving second harmonic generated light the situation is quite different .the reason for this is that the local nonlinear fresnel coefficient is not an even , but an _ odd _ function of the angle of incidence .hence , the phase difference between the two type i paths , say , will not be zero any more in general since the phases for these two paths will add instead of subtract . 
if this phase shift is positive for the path in fig . [ fig : phen : sh - multiple - scattering - proc]a , say , then it will be negative for the path shown in fig . [ fig : phen : sh - multiple - scattering - proc]b , since the local incident angles in the two cases have different signs and the local nonlinear fresnel reflection coefficient is an odd function of the incident angle . hence , in the nonlinear case the phase difference in the backscattering direction is different from zero for the paths that seem to interfere . from the numerical results shown in this section , the two paths in fact seem to be close to being out of phase , resulting in destructive interference , i.e. a dip relative to the background in the backscattering direction . so far in this section we have presented both experimental and numerical results for the second harmonic generated light scattered from strongly rough surfaces . experiments have also been conducted for weakly rough surfaces . the results are quite similar to the experimental results presented in fig . [ fig : phen : sh - experimental ] . in particular , also for these weakly rough surfaces the diffusely scattered second harmonic generated light showed a dip in the backscattering direction . however , in theoretical studies both dips and peaks in the backscattering direction have been predicted . whether one obtains a peak or a dip depends on the values used for the nonlinear phenomenological constants . even though peaks have been predicted theoretically , only dips have so far been seen in experiments . the scattering processes giving rise to these dips ( or peaks ) are believed to be different for weakly and strongly rough surfaces . this situation resembles quite a bit the origin of the enhanced backscattering peak for weakly and strongly rough surfaces . indeed , for weakly rough surfaces the origin of the dip in the intensity of the diffusely scattered light at frequency is intimately related to the excitation of surface plasmon polaritons at this frequency . thus such dips are not to be expected for the second harmonic light generated in -polarization from weakly rough surfaces . we have in the introduction to this review tried to give some glimpses of the many multiple scattering phenomena that may take place when electromagnetic waves are scattered by a randomly rough surface separating two media of different dielectric properties . even though much is understood today when it comes to the rough surface scattering problem , there are still , after a century of research efforts , many questions that have not been addressed and answered properly . below we will therefore try to sketch out some directions for further research . we have exclusively considered one - dimensional surfaces . naturally occurring surfaces are mostly two - dimensional . thus , the advance most needed in the field is techniques , either numerical or analytical , that can accurately and efficiently handle electromagnetic wave scattering from two - dimensional surfaces of varying rms - height . two - dimensional weakly rough surfaces can be treated by perturbation theory , but if the surface is not weakly rough this approach is no longer adequate . in principle a general solution of the scattering problem can be formulated on the basis of a vector version of the extinction theorem .
however , the resulting system of linear equations that needs to be solved in order to calculate the source functions is so big , and therefore requires so much computer memory , that it is not yet practical in general . thus , one has to come up with new and more efficient methods for solving this kind of problem , or , the less appealing approach , wait for advances in computer technology to make the extinction theorem approach tractable from a computational point of view . the scientific community dealing with wave scattering from disordered systems seems to be divided into two separate groups : ( _ i _ ) those that deal with surface disordered systems and ( _ ii _ ) those that concentrate on systems with volume disorder . in the future these two `` groups '' will have to be unified to a much higher degree than is the case today in order to deal with scattering systems consisting of bulk disordered materials bounded by a random surface . strictly speaking , some works have already been published for such `` dual '' disordered systems , but more work , and in a more general framework , still needs to be done for such problems . an area that needs to be addressed further in the future is the _ inverse scattering problem _ , in contrast to the forward scattering problem , which is the one that has received the most attention in theoretical studies of wave scattering from randomly rough surfaces . in the inverse scattering problem one has information , _ e.g. _ from experiments , about the angular dependence of the scattered light , and one is interested in trying to reconstruct the surface profile function or its statistical properties . this problem is quite difficult and huge research efforts have been spent on it in related fields like remote sensing and seismics in order to try to find its solution . so far a general solution to the problem has not been found . it is a pleasure to acknowledge numerous fruitful discussions with alex hansen , ola hunderi , jacques jupille , rémi lazzari , tamara a. leskova , alexei a. maradudin , eugenio r. méndez , stéphane roux , and damien vandembroucq . this research was supported in part by the research council of norway ( contract no . 32690/213 ) , norsk hydro asa , ntnu , total norge asa , centre national de la recherche scientifique ( cnrs ) , and the army research office ( daad19 - 99 - 1 - 0321 ) . in this appendix , some calculational details are presented for the matrix elements appearing in the matrix equations used to determine the source functions needed in the rigorous numerical simulation approach given in sect . [ sect : theory : sect : numsim ] . from this section , eqs . , we recall that these matrix elements are defined as [ app : matrix - elements ] where in the last transition we have made a change of variable , and where the kernels , according to eqs . and , are given by [ app : kernels ] with and a similar expression holds for , and with . in the above expressions denote the free - space green 's functions for the helmholtz equation . in two dimensions , as we will be considering here , it can be written as where is the hankel function of the first kind and zeroth order . by substituting this expression for the green 's function into eqs .
for the kernels , one gets [ app : kernel - via - hankel ] , \qquad \\\label{app : kernel - via - hankel - b } b_\pm(x_1|x_1 ' ; \omega ) & = & \lim_{\eta\rightarrow 0^+ } \left(- \frac{i}{4}\right ) h^{(1)}_0\!\!\left ( \chi_\pm(x_1|x_1 ' ) \right),\end{aligned}\ ] ] where we have defined notice that since the hankel functions are divergent for vanishing argument , so are the kernels and .however , fortunately these singularities are integrable , so the matrix elements and are in fact non - singular everywhere and in particular when .we will now show this and obtain explicit expressions for these matrix elements .we start by considering the off - diagonal elements where the kernels are non - singular . in this case , one may approximate the integrals in eqs . by for example the midpoint method with the result that ( ) where the expressions for the kernels are understood to be taken in the form eqs . .so now what about the diagonal elements where the kernels are singular ? in order to calculate these elements ,we start by noting that , needed in order to evaluate the matrix element , can be written as where we have taylor expanded and where we recall from eq . that .furthermore , by advantage of the following ( small argument ) asymptotic expansions for the hankel functions [ app : h - expansion ] where is the euler constant . with these expressions , it is now rather straight forward to derive , to obtain the matrix elements by integrating the resulting expressions term - by - term . to demonstrate this we start with the matrix element . with eqs . and and passing to limit whenever no singularities results and one gets { \nonumber}\\ & = & - \frac{i}{2 } \frac{\delta\xi}{2 } \left [ \frac{2i}{\pi } \left\ { \ln \left(\sqrt{\varepsilon_\pm}{\frac{\omega^{}}{c^{}}}\frac{\gamma(\xi_m)\delta\xi}{2e } \right ) + \gamma\right\ } + 1 + \ldots \right ] { \nonumber}\\ & \simeq & -\frac{i}{4 } \delta\xi \ , h^{(1)}_0\left ( \sqrt{\varepsilon_\pm } { \frac{\omega^{}}{c^ { } } } \frac{\gamma(\xi_m)\delta\xi}{2e } \right).\end{aligned}\ ] ] here in the last transition we have eq . one more .furthermore , for the leading term of the diagonal elements of one gets in a similar way from eqs . and { \nonumber}\\ & & \hspace{3.5cm}\times \left [ \eta+ \frac{1}{2 } \zeta''(\xi_m)u^2+\ldots \right ] { \nonumber}\\ & = & \lim_{\eta\rightarrow 0^+ } \frac{1}{2\pi } \int^{\frac{\delta \xi}{2\eta}}_{-\frac{\delta \xi}{2\eta } } du\ ; \frac{1}{\gamma^2(\xi_m)u^2 - 2\zeta'(\xi_m)u+1 } { \nonumber}\\ & & \mbox { } + \frac{1}{4\pi}\frac{\zeta''(\xi_m)}{\gamma^2(\xi_m ) } \int^{\frac{\delta \xi}{2}}_{-\frac{\delta \xi}{2 } } du { \nonumber}\\ & = & \frac{1}{2\pi}\lim_{\eta\rightarrow 0^+ } \left [ \tan^{-1}\left(-\zeta'(\xi_m)+\gamma(\xi_m)u\right ) \right]^{\frac{\delta \xi}{2\eta}}_{u=-\frac{\delta \xi}{2\eta } } + \delta\xi\frac{\zeta''(\xi_m)}{4\pi\gamma^2(\xi_m ) } { \nonumber}\\ & = & \frac{1}{2 } + \delta\xi\frac{\zeta''(\xi_m)}{4\pi\gamma^2(\xi_m)}\end{aligned}\ ] ] to sum up we have for the matrix elements and in these equations and are given by eqs .in this appendix some of the lengthy formulae found in small amplitude perturbation theory , sect .[ sect : theory : sapt ] , are given .in particular we here give the first few -functions found in eqs .we will now in the next two subsection explicitly give these functions for and -polarization .all explicit reference to the frequency has been suppressed .we have also for completeness used for the dielectric constant of the upper medium . 
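before moving on , the quadrature prescription derived in appendix a above ( midpoint rule for the off - diagonal elements , small - argument regularization of the logarithmic singularity on the diagonal ) can be collected in a short numerical sketch . the snippet below is only an illustration of those formulae , with assumed variable names and normalizations , and should not be read as the authors ' production code .
....
import numpy as np
from scipy.special import hankel1

def a_and_b_matrices(xi, zeta, eps, omega_over_c):
    """Discretized A- and B-type matrix elements for a surface profile zeta(xi).

    xi   : uniformly spaced quadrature points xi_m (spacing dxi)
    zeta : surface profile values zeta(xi_m)
    eps  : dielectric constant of the medium considered
    """
    n, dxi = len(xi), xi[1] - xi[0]
    dz = np.gradient(zeta, xi)            # zeta'(xi_m)
    d2z = np.gradient(dz, xi)             # zeta''(xi_m)
    gamma = np.sqrt(1.0 + dz ** 2)        # gamma(xi_m) = sqrt(1 + zeta'^2)
    k = np.sqrt(eps) * omega_over_c

    a = np.zeros((n, n), dtype=complex)
    b = np.zeros((n, n), dtype=complex)
    for m in range(n):
        for mp in range(n):
            if m == mp:
                # diagonal: integrable logarithmic singularity handled analytically
                b[m, m] = -0.25j * dxi * hankel1(0, k * gamma[m] * dxi / (2.0 * np.e))
                a[m, m] = 0.5 + dxi * d2z[m] / (4.0 * np.pi * gamma[m] ** 2)
            else:
                # off-diagonal: midpoint rule with the exact Hankel-function kernel
                r = np.hypot(xi[m] - xi[mp], zeta[m] - zeta[mp])
                b[m, mp] = -0.25j * dxi * hankel1(0, k * r)
                # the off-diagonal A kernel involves the normal derivative of the
                # Green's function; it is omitted here to keep the sketch short
    return a, b
....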
in the case of vacuumthis constant is . + \varepsilon_{0}\alpha(k)\left[\alpha^2(q)-qk\right ] \right\ } \frac{2\alpha_0(k)}{\varepsilon_{}\alpha_0(k)+\varepsilon_{0}\alpha(q ) } { \nonumber}\\ & & \mbox { } + 2 \frac { \left(\varepsilon_{0}-\varepsilon_{}\right)^2 } { \varepsilon_{}\alpha_0(q)+\varepsilon_{0}\alpha(q ) } \frac { \alpha(q)\alpha_0(p_1)+qp_1 } { \varepsilon_{}\alpha_0(p_1)+\varepsilon_{0}\alpha(p_1 ) } \frac { 2\alpha_0(k ) \left [ \varepsilon_{0}\alpha(p_1)\alpha(k)-\varepsilon_{}p_1k \right ] } { \varepsilon_{}\alpha_0(k)+\varepsilon_{0}\alpha(k ) } \end{aligned}\ ] ] \left [ \varepsilon_{0}\alpha(q)\alpha(k)-\varepsilon_{}qk \right ] { \nonumber}\right .\\ & & \mbox { } \hspace{3 cm } \left .-2\varepsilon_{0}qk\alpha(q)\alpha(k ) \right\ } \frac { 2\alpha_0(k ) } { \varepsilon\alpha_0(k)+\varepsilon_{0}\alpha(k ) } { \nonumber}\\ & & \mbox { } - 3i \frac { \varepsilon_{0}-\varepsilon _ { } } { \varepsilon_{}\alpha_0(q)+\varepsilon_{0}\alpha(q ) } \left[\alpha(q)\alpha_0(p_1)+qp_1\right ] \frac { \varepsilon_{0}-\varepsilon _ { } } { \varepsilon_{}\alpha_0(p_1)+\varepsilon_{0}\alpha(p_1 ) } { \nonumber}\\ & & \mbox { } \qquad \times \left\ { \varepsilon_{}\alpha(p_1)\left[\alpha^2_0(k)-p_1k\right ] + \varepsilon_{0}\alpha(k)\left[\alpha^2(p_1)-p_1k\right ] \right\ } \frac{2\alpha_0(k)}{\varepsilon_{}\alpha_0(k)+\varepsilon_{0}\alpha(k ) } { \nonumber}\\ & & \mbox { } -i \left\ { 3 \frac { \varepsilon_{0}-\varepsilon _ { } } { \varepsilon_{}\alpha_0(q)+\varepsilon_{0}\alpha(q ) } \left[\alpha(q)\alpha_0(p_2)+qp_2\right ] \left[\alpha(q)-\alpha_0(p_2)\right ] { \nonumber}\right . \\ & & \mbox { } \qquad + 6 \frac { \varepsilon_{0}-\varepsilon _ { } } { \varepsilon_{}\alpha_0(q)+\varepsilon_{0}\alpha(q ) } \left[\alpha(q)\alpha_0(p_1)+qp_1\right ] { \nonumber}\\ & & \mbox { } \qquad \quad \times \left . \frac { \varepsilon_{0}-\varepsilon _ { } } { \varepsilon_{}\alpha_0(p_1)+\varepsilon_{0}\alpha(p_1 ) } \left[\alpha(p_1)\alpha_0(p_2)+p_1p_2\right ] \right\ } { \nonumber}\\ & & \mbox { } \quad \times \frac { \varepsilon_{0}-\varepsilon _ { } } { \varepsilon_{}\alpha_0(p_2)+\varepsilon_{0}\alpha(p_2 ) } \left[\varepsilon_0\alpha(p_2)\alpha(k)-\varepsilon p_1k\right ] \frac{2\alpha_0(k)}{\varepsilon_{}\alpha_0(k)+\varepsilon_{0}\alpha(k ) } , \end{aligned}\ ] ] { \nonumber}\right .\\ & & \mbox { } \hspace{3cm}\left .+ 3\left [ \alpha(q)-\alpha_0(p_1)\right ] + 2{\frac{\omega^{2}}{c^{2 } } } \frac { \varepsilon_{0}-\varepsilon _ { } } { \alpha_0(p_1)+\alpha(p_1 ) } { \frac{\omega^{2}}{c^{2 } } } \frac { \varepsilon_{0}-\varepsilon _ { } } { \alpha_0(p_2)+\alpha(p_2 ) } \right\ } \frac{2\alpha_0(k)}{\alpha_0(k)+\alpha(k ) } { \nonumber}\\ \end{aligned}\ ] ] t. a. leskova , a. a. maradudin , m. levay - lucero , and e. r. mndez , _ multiple - scattering effects in the second harmonic generation of light in reflection from a randomly rough metal surface _( to appear ann .
|
no surface is perfectly planar at all scales . the notion of flatness of a surface therefore depends on the size of the probe used to observe it . as a consequence rough interfaces are abundant in nature . here the old , but still active field of rough surface scattering of electromagnetic waves is addressed . this topic has implications and practical applications in fields as diverse as observational astronomy and the electronics industry . this article reviews the theoretical and computational foundation and methods used in the study of rough surface scattering . furthermore , it presents and explains the physical origin of a series of multiple scattering surface phenomena . in particular what is discussed are : the enhanced backscattering and satellite peak phenomena , coherent effects in angular intensity correlation functions and second harmonic generated light ( a non - linear effect ) .
|
one of the reasoning models that is more useful to represent real situations is fuzzy reasoning .indeed , world information is not represented in a crisp way .its representation is imperfect , fuzzy , etc ., so that the management of uncertainty is very important in knowledge representation .there are multiple frameworks for incorporating uncertainty in logic programming : * fuzzy set theory , * probability theory , * multi - valued logic , * possibilistic logic despite of the multitude of theoretical approaches to this issue , few of them resulted in actual practically usable tools . since logic programming is traditionally used in knowledge representation and reasoning ,we argue it is perfectly well - suited to implement a fuzzy reasoning tool as ours .the result of introducing fuzzy logic into logic programming has been the development of several fuzzy systems over prolog .these systems replace the inference mechanism of prolog , sld - resolution , with a fuzzy variant that is able to handle partial truth .most of these systems implement the fuzzy resolution introduced by lee in , as the prolog - elf system , the fril prolog system and the f - prolog language . however , there is no common method for fuzzifying prolog , as has been noted in .some of these fuzzy prolog systems only consider the predicates fuzziness whereas other systems consider fuzzy facts or fuzzy rules .there is no agreement about which fuzzy logic should be used .most of them use min - max logic ( for modelling the conjunction and disjunction operations ) but other systems just use lukasiewicz logic .furthermore , logic programming is considered a useful tool for implementing methods for reasoning with uncertainty in .there is also an extension of constraint logic programming , which can model logics based on semiring structures .this framework can model min - max fuzzy logic , which is the only logic with semiring structure .another theoretical model for fuzzy logic programming without negation has been proposed by vojtas in , which deals with many - valued implications .one of the most promising fuzzy tools for prolog was the `` fuzzy prolog '' system .the most important advantages against the other approaches are : 1. a truth value will be a finite union of sub - intervals on $ ] .an interval is a particular case of union of one element , and a unique truth value is a particular case of having an interval with only one element .a truth value is propagated through the rules by means of an _aggregation operator_. the definition of this _ aggregation operator _ is general and it subsumes conjunctive operators ( triangular norms like min , prod , etc . ) , disjunctive operators ( triangular co - norms , like max , sum , etc . ) , average operators ( averages as arithmetic average , quasi - linear average , etc ) and hybrid operators ( combinations of the above operators ) .crisp and fuzzy reasoning are consistently combined .fuzzy prolog adds fuzziness to a prolog compiler using clp( ) instead of implementing a new fuzzy resolution , as other former fuzzy prologs do .it represents intervals as constraints over real numbers and _ aggregation operators _ as operations with these constraints , so it uses prolog s built - in inference mechanism to handle the concept of partial truth . over the last few yearsseveral papers have been published by medina et al .( ) about multi - adjoint programming , which describe a theoretical model , but no means of serious implementations apart from promising prototypes . 
indeed , that was the reason for developing fuzzy prolog . fuzzy prolog is a very expressive tool which allows the user to program almost everything , but we have to pay for this expressiveness . the cost is a complex syntax that is difficult to understand . the motivation for developing rfuzzy is mainly focused on reducing this complex syntax . this reduction is based on three ideas : 1 . use real numbers instead of intervals between real numbers to represent truth values . fuzzy prolog answers to user queries are intervals like _ it_will_rain ( tonight , [ 0 , 1 ] ) _ . this is a bit difficult for normal users to understand . the truth value in this example is between 0 and 1 , which means that the program cannot conclude anything about the predicate 's truth value . 2 . whenever it is possible , do not answer user queries using constraints . a fuzzy prolog answer to a user query can be a constraint , like _ it_will_rain ( tomorrow , [ x , y ] ) , x >= 0 , x =< 1 , y >= 0 , y =< 1 _ . the meaning of this example is exactly the same as the meaning of the previous one , but it is slightly more difficult to understand . 3 . truth value variables do not need to be coded . taking care of variables to manage the predicates ' truth values introduces errors and makes the code illegible , without giving us any advantage . rfuzzy uses real numbers to represent truth values and its replies are never constraints . besides , it is able to distinguish between crisp and fuzzy predicates and it manages the introduction of truth value variables , so the user does not need to take care of them . truth variables are always introduced at the end of the predicate arguments list , so this can be seen as some syntactic sugar . we explain this in subsection [ doing - queries - with - truth - values ] . from the point of view of expressiveness , we can remark that rfuzzy offers the user the ability to define types , general and conditioned default values , and truth value representations by means of facts , functions or rules . besides , it implements multi - adjoint logic with a representation of the concept of credibility of the rules , so it is one of the first tools that actually model multi - adjoint logic . in this section we are going to describe rfuzzy 's syntax . rfuzzy defines the syntax of a new subset of prolog predicates to work with truth values and to assign credibility to rules . the extensions that we have added to provide fuzziness of predicates are : type information , truth values ( for facts , functions and rules ) and default truth values . rfuzzy shares with fuzzy prolog most of its nice expressive characteristics : prolog - like syntax ( based on using facts and clauses ) , use of any aggregation operator , flexibility of query syntax , constructivity of the answers , etc . nevertheless , rfuzzy is simpler than fuzzy prolog for the user because the truth values are simple real numbers instead of the general structures of fuzzy prolog . prolog does not have types . prolog code consists of formulas , and at execution time prolog looks for all of them to be true . to do that , it generates a herbrand universe and tries to substitute every variable with a herbrand term .
as we do not want programs to look for an answer infinitely, we offer the user a facility to restrict the set of possible solutions .this extension is named `` types '' and its syntax is shown in ( [ eq - def - type ] ) .\label{eq - def - type}\ ] ] where _set_prop _ is a reserved word , _ pred _ is the name of the typed predicate , _ ar _ is its arity and _ type_pred_\{n } _ is the predicate used to assure that the value given to the argument in the position _n _ of a call to _ pred / ar _ is correctly typed .type_pred_\{n } _ must have arity 1 .the example below shows that the two arguments of the predicate _ has_lower_price/2 _ have to be of type _car/1 _ and which individuals belong to that type ..... : - set_prop has_lower_price/2 = > car/1 , car/1 .car(vw_caddy ) .car(alfa_romeo_gt ) . car(aston_martin_bulldog ) .car(lamborghini_urraco ) . .... fuzzy facts are facts to which we assign a truth value . to code them in programswe offer a special syntax , so prolog can distinguish between normal facts and fuzzy facts .this syntax is shown in ( [ eq - def - fact ] ) . arguments ( _ args _ ) should be ground and the truth value ( _ truth_val _ ) must be a real number between 0 and 1 .the example below defines that the car __ is an _ expensive_car _ with a truth value 0.6 . ....expensive_car(alfa_romeo_gt ) value 0.6 . .... fact truth value definition ( see subsection [ fact - truth - value ] ) is worth for a finite ( and relative small ) number of individuals . as we may want to define a big amount of individuals , we need more than this . fuzzy truth values are usually represented using continuous functions .[ fig : teenager_credibility ] shows an example in which the truth value function assigns the truth value of being _ teenager _ from the person s age value .a function can be defined in several ways , but the easiest one is via a sequence of ordered pairs whose first element is the fact and the second element is the truth value assigned to that fact . functions used to define the truth value of some group of facts are usually continuous and linear over intervals . to define those functionsthere is no necessity to write down the value assigned to each element in their domains .a better way to define them is by means of their inflexion points , so function values for the elements between the inflexion points are determined by means of interpolation .rfuzzy provides the syntax for defining functions by stretches .this syntax is shown in ( [ eq - def - function ] ) .external brackets represent the prolog list symbols and internal brackets represent cardinality in the formula notation ._ arg1 , ... , argn _ should be ground terms ( numbers ) and _ truth_val1 , ... , truth_valn_ should be border truth values .the truth value of the rest of the elements ( apart from the border elements ) is obtained by interpolation . ^ * ] ) \ .\label{eq - def - function}\ ] ] .... : - set_prop teenager/1 = > people_age/1 . : - default(teenager/1 , 0 ) .teenager : # ( [ ( 9 , 0 ) , ( 10 , 1 ) , ( 19 , 1 ) , ( 20 , 0 ) ] ) . .... a tool which only allows the user to define truth values through functions and facts leaks on allowing him to combine those truth values for representing more complex situations . 
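before moving on to rules , here is a small sketch of the interpolation semantics of the teenager/1 example above . python is used only to illustrate the intended meaning of a function defined by stretches ; it is not the code generated by the package , and it assumes , as the default declaration suggests , that ages outside the declared stretches fall back to the declared default value .
....
import numpy as np

# inflexion points of teenager/1 from the example: (age, truth value)
points = [(9, 0.0), (10, 1.0), (19, 1.0), (20, 0.0)]
default = 0.0            # from  :- default(teenager/1, 0).

def teenager_truth_value(age):
    xs, ys = zip(*points)
    if age < xs[0] or age > xs[-1]:
        return default                    # outside every declared stretch
    return float(np.interp(age, xs, ys))  # linear interpolation between inflexion points

for age in (5, 9.5, 15, 19.5, 30):
    print(age, teenager_truth_value(age))   # 0.0, 0.5, 1.0, 0.5, 0.0
....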
a rule is the perfect tool to combine the truth values of facts , functions , and other rules .rules allow the user to combine truth values in the correct way ( by means of aggregation operators , like _ maximum _ or _ product _ ) .besides this combination truth value for the body of the rule , the rule can be given an overall credibility truth value .credibility is used to express how much we trust the rule we write .suppose a small weather example in which we have the rule _ it will rain if it is cloudy and it is hot_. as it might rain but it might not , we can assign the rule a credibility of 0.7 .as expected , the truth value obtained from the body is combined with the credibility value of the rule to obtain a final truth value ._ rfuzzy _ offers the user a concrete syntax to define combinations of truth values by means of aggregation operations , and assign to that rules a credibility .this syntax extension is defined in ( [ eq - def - predicate ] ) .indeed , the user can choose two aggregation operators : _op2 _ for combining the truth values of the subgoals of the rule s body and _ op1 _ for combining the previous result with the credibility of the rule .^*\ ) \ [ \ \textbf{cred\ ( } op1 , value1\textbf{)}\ ] ^{0,1 } \textbf { : } \thicksim op2~ pred1(arg1\ [ , \ arg2]^*\ ) \nonumber \\ \ [ , \ pred2(arg1\ [ , \ arg2]^*\ ) ] \ .\\label{eq - def - predicate}\end{aligned}\ ] ] the following examples show its usage .the second one uses the operator _prod _ for aggregating truth values of the subgoals of the body and the operator _min _ to aggregate the result with the credibility of the rule , 0.8 . as can be deduced from syntax and examples , _ cred _ and _ : _are reserved words . ....tempting_restaurant(r ) : ~ prod low_distance(r ) , cheap(r ) , traditional(r ) . .... .... good_player(j ) cred ( min,0.8 ) : ~ prod swift(j ) , tall(j ) , experience(j ) . ....unfortunately , information provided by the user is not complete in general .so there are many cases in which we have no information about the truth value of an individual or a set of them . nevertheless , it is interesting not to stop a complex query evaluation just because we have no information about one or more subgoals if we can use a reasonable approximation .the solution to this problem is using default truth values for these cases .the rfuzzy extension to define a default truth value for a predicate when applied to individuals for which the user has not defined an explicit truth value is named _ general default truth value_. _ conditioned default truth value _ is used when the default truth value only applies to a subset of the function s domain .this subset is defined by a membership predicate which is true only when an individual belongs to the subset .the membership predicate ( _ membership_predicate / ar _ ) and the predicate to which it is applied ( _ pred / ar _ ) need to be have the same arity ( _ ar _ ) . if not , an error message will be shown at compilation time .the syntax for defining a general default truth value is shown in ( [ eq - def - defaults ] ) , and the syntax for assigning a conditioned default truth value is shown in ( [ eq - def - default - with - conditions ] ) ._ pred / ar _ is in both cases the predicate to which we are defining default values .as expected , when defining the three cases ( explicit , conditioned and default truth value ) only one will be given back when doing a query .the precedence when looking for the truth value goes from the most concrete to the least one . 
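returning for a moment to the rule syntax above , the combination of body truth values and credibility can be spelled out in a small sketch . python is used here only to make the arithmetic explicit ; the operator names mirror the good_player example given earlier , and the numeric truth values are invented for illustration .
....
from functools import reduce
import operator

def prod(values):
    return reduce(operator.mul, values, 1.0)

def rule_truth(body_values, credibility=None, op_body=prod, op_cred=min):
    v = op_body(body_values)            # op2: aggregate the truth values of the body subgoals
    return v if credibility is None else op_cred(credibility, v)   # op1: fold in the credibility

# good_player(J) cred (min, 0.8) :~ prod swift(J), tall(J), experience(J).
swift, tall, experience = 0.9, 0.7, 0.6   # hypothetical truth values for some player
print(rule_truth([swift, tall, experience], credibility=0.8))
# prod = 0.378 and min(0.8, 0.378) = 0.378
....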
the example below shows how to assign a default truth value of 0.5 to all cars that do not have an explicit truth value nor have a default conditioned truth value .besides , it shows how to assign a conditioned default truth value to all cars belonging to a small subset and not having an explicit truth value .this subset is determined by the membership predicate _expensive_type/1 _ , and default truth value for its elements is 0.9 .lamborghini_urraco _ is an _ expensive_car _ with truth value 0.9 but _vw_caddy _ is an _expensive_car _ with truth value 0.5 .both values are default approximations because we have no direct declaration ( as for _ alfa_romeo_gt _ that is an _expensive_car _ with a truth value 0.6 as we show above ) ..... : - set_prop expensive_car/1 = > car/1 .: - default(expensive_car/1 , 0.9 ) = >expensive_type/1 . : - default(expensive_car/1 , 0.5 ) . expensive_type(lamborghini_urraco ) .expensive_type(aston_martin_bulldog ) . ....indeed the program has to be run . when compiling , _rfuzzy _ adds a new argument to the arguments list of each fuzzy predicate .this argument serves for querying about the predicate truth value .it can be seen as syntactic sugar , as truth value is not part of the predicate arguments , but metadata information .truth value argument is added to the predicates in a uniform way : it is always a new argument at the end of the arguments list of the predicate . in the previous example we wrote _, so to query the system we have to give the predicate two arguments instead of only one where the second one will represent the query s truth value. this can be seen in the first example of subsection [ constructive - answers ] .a fuzzy tool should be able to provide constructive answers for queries .the regular ( easy ) questions are asking for the truth value of an element .for example , how expensive is an _alfa_romeo_gt _ ?- expensive_car(alfa_romeo_gt , v ) .v = 0.6 ? ; no .... but the really interesting queries are the ones that ask for values that satisfy constraints over the truth value .for example , which cars are very expensive ? .rfuzzy provides this constructive functionality . .... ?- expensive_car(x , v ) , v > 0.8 .v = 0.9 , x = aston_martin_bulldog ? ; v = 0.9 , x = lamborghini_urraco ? ; no .... the rfuzzy package implements a meta - translation of the rfuzzy syntax to iso prolog , via clp(r ) , this is the reason for its constructivity .rfuzzy is mainly suitable for expert systems applications . as mentioned before ,its main advantages in comparison to fuzzy prolog are its simpler syntax , the use of real numbers instead of intervals between them and the implicit handling of truth values .besides , it presents facts truth values ( explicit , default or conditioned default truth value ) , functions truth values and rules ( with or without credibility ) which simplifies the user development process a lot . although a medical expert system development were the best example of using rfuzzy , due to lack of space we prefer to show here one in which we decide which is the best restaurant for going out . 
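before turning to queries , the precedence among explicit , conditioned default and general default truth values illustrated by the expensive_car example above can be sketched as follows ; this is a python rendering of one assumed reading of the compiled behaviour , not the generated ciao code .
....
explicit = {"alfa_romeo_gt": 0.6}                                # expensive_car fact
expensive_type = {"lamborghini_urraco", "aston_martin_bulldog"}  # membership predicate
conditioned_default, general_default = 0.9, 0.5

def expensive_car(car):
    if car in explicit:              # 1. explicit truth value
        return explicit[car]
    if car in expensive_type:        # 2. conditioned default truth value
        return conditioned_default
    return general_default           # 3. general default truth value

for car in ("alfa_romeo_gt", "lamborghini_urraco", "vw_caddy"):
    print(car, expensive_car(car))   # 0.6, 0.9, 0.5
....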
in the examplewe can see that we know the distance to all the restaurants in a crisp way .this crisp value is translated by means of _ low_distance _ and _ low_distance_aux _ into a fuzzy one which is used into _ tempting_restaurant _ to determine its truth value .this allows us to ask which is the truth value of each tempting restaurant , which restaurant is a tempting restaurant with a truth value of , for example , 0.7 or list all tempting restaurants with their truth values .the rfuzzy module with installation instructions and examples can be downloaded from http://babel.ls.fi.upm.es / software / rfuzzy/.rfuzzy is a logic programming language that is able to model all the extensions that are described in section [ rfuzzy : syntax ] .it is implemented as a ciao prolog package because ciao prolog offers the possibility of dealing with a higher order compilation through the implementation of ciao packages .those packages serve as input for the _ `` ciao system preprocessor '' _ ( ciaopp ) , a tool able to perform source - to - source transformations .the reason beyond the implementation of _ rfuzzy _ as a ciao prolog package is that the resultant code has to deal with two kinds of queries : * queries in which the user asks for the truth value of an individual , and * queries in which the user asks for an individual with a concrete truth value .as can be seen in the following example , this is not an easy task . .... ? - a is 1 , b is 2 , c is a + b. a = 1 , b = 2 , c = 3 ? .- c is 3 , c is a + b. { error : illegal arithmetic expression } { error : illegal arithmetic expression } no? - .... formula _c is a + b _ only works if variables a and b are bound .almost all predicates that are problematic with non - bound variables have inside comparisons and/or assignments .this aims us to translate rfuzzy programs into clp( ) programs .clp( ) is a ciao prolog package which translates real number operations into constraints applied to the variables involved in those operations . taking advantage of rfuzzy and clp( ) transformations, our tool compiles rfuzzy programs into iso prolog programs , so the interpreter is able to work with them as it normally does . as a result ,the global compilation process has two preprocessor steps : ( 1 ) the rfuzzy program is translated into clp( ) constraints by means of the rfuzzy package and ( 2 ) those constraints are translated into iso prolog by using the clp( ) package .[ fig : rfuzzy_architecture ] shows the whole process . in the following example the predicate _ tempting_restaurant _is translated from rfuzzy syntax into iso prolog syntax . in the first step , the rfuzzy package inserts truth value variables , the _ inject _ metapredicate call ( one of its arguments is the aggregation operator to be used , _ prod _ ) and inserts rfuzzy comparisons to take care at execution time that the rule s truth value is always between zero and one . in the second step , clp( )converts comparisons into constraints ( via predicate calls ) ..... % rfuzzy program tempting_restaurant(r ) : ~ prod low_distance(r ) , cheap(r ) , traditional(r ) ..... .... % clp(r ) program rfuzzy_rule_tempting_restaurant(r,_1 ) : - low_distance(r,_2 ) , cheap(r,_3 ) , traditional(r,_4 ) , inject([_2,_3,_4],prod,_1 ) , _ 1 .>=. 0 , _ 1 .=<. 1 ..... .... % iso prolog program rfuzzy_rule_tempting_restaurant(r,_1 ) : - low_distance(r,_2 ) , cheap(r,_3 ) , traditional(r,_4 ) , inject([_2,_3,_4],prod,_1 ) , solve_generic_1(le,0,_1,-1 ) , solve_generic_1(le,-1,_1,1 ) . .... 
internally , the rfuzzy package unifies and translates all the information given by the user for each predicate ( types , default values with and without condition , truth values defined in facts and rules with and without credibility ) into a single predicate body . a simplified version of the skeleton used for that predicate is shown below . _ rfuzzy package simplified skeleton _ .... main : - types , ( normal ; default ) . normal : - ( fact ; ( \+(fact_aux ) , function ) ; ( \+(fact_aux ) , \+(function_aux ) , rule ) ) . default : - \+(fact_aux ) , \+(function_aux ) , \+(rule_aux ) , ( cl_with_cond ; ( \+(cl_with_cond_aux ) , cl_with_no_cond ) ) . .... the skeleton has three different parts : the one which takes care of allowing only queries or answers with the expected individuals , the one which looks for a concrete truth value ( it can be defined by means of a fact , a function or a rule ) , and the one which looks for a default truth value ( conditioned or not ) . predicates ending in _ aux do not take care of the truth value argument . the first part is obtained from the type definitions ( see [ type - definition ] ) , translating all types into a predicate which is called first ( types ) , so we ensure we only work with the expected individuals . the second part looks for a concrete value whose arguments have to unify with the parameters the user has given . the precedence when looking for it is : 1 . a fact ( see subsection [ fact - truth - value ] ) 2 . a function ( see subsection [ functional - truth - value ] ) 3 . a rule ( see subsection [ rule - truth - value ] ) . the third part is only called when the second one ( searching for a concrete truth value ) fails , and looks for a conditioned or general default truth value . _ rfuzzy _ offers users a new framework to represent fuzzy problems over real numbers . _ fuzzy prolog _ is an existing framework for dealing with the representation of fuzzy problems . advantages over _ fuzzy prolog _ are a simpler syntax and the elimination of answers with constraints , and _ rfuzzy _ is one of the first tools modelling multi - adjoint logic , as explained in subsection [ subsec : motivation ] . _ rfuzzy _ syntax is simpler than _ fuzzy prolog _ syntax . its fuzzy values are simple real numbers instead of intervals between real numbers , and it hides the management of truth value variables . as normal fuzzy problems do not use intervals to represent fuzziness and do not need to code an uncommon behaviour of fuzzy variables , this syntax reduction is an advantage . programs written in _ rfuzzy _ syntax are more legible and easier to understand than _ fuzzy prolog _ programs . _ fuzzy prolog _ answers to user queries are difficult to understand due to the existence of constraints . as normal replies to final users are ground terms , the programmer has to code by hand how to reach them . to eliminate those constraints and answer queries with ground terms , the programmer tries to substitute variables with ground terms until all the constraints are made true . _ rfuzzy _ offers a powerful tool to deal with this task : _ type definition _ . _ type definition _ ( see subsection [ type - definition ] ) allows the user to define which terms are suitable for being substituted into a variable , so she does not have to code this behaviour again . besides , the elimination of answers with constraints provides more human - readable answers and programs that are easier to test ( because the answers we test do not have constraints , just ground terms ) . there is also an extension to introduce default truth values .
as world information is sometimes incomplete , _ rfuzzy _ offers to the user the possibility to define default truth values and default conditioned truth values ( see subsection [ general - and - conditioned - default - truth - values ] ) .this allows us to make inference with default truth values when we do not know anything about the truth of some fact .extensions added to _ prolog _ by _ rfuzzy _ are : types , default truth values ( conditioned or not ) , assignment of truth values to individuals by means of facts , functions or rules , and assignment of credibility to the rules .there are countless applications and research lines which can benefit from the advantages of using the fuzzy representations offered by rfuzzy .some examples are : search engines , knowledge extraction ( from databases , ontologies , etc . ) , semantic web , business rules , and coding rules ( where the violation of one rule can be given a truth value ) .current work on rfuzzy tries to apply constructive negation to the engine .rfuzzy needs to define types in a constructive way ( by means of predicates that are able to generate all their individuals by backtracking ) so we can not use constraints .future research will be done in this line for widening the definition of types .abietar , p.j .morcillo , and g. moreno . designing a software tool for fuzzy logic programming . in t.e .simos and g. maroulis , editors , _ proc . of the int .conf . of computational methods in sciences and engineering .iccmse07 _ ,volume 2 of _ computation in mordern science and engineering _ , pages 11171120 .american institute of physics , 2007 .distributed by springer .f. bueno , p. lpez - garca , g. puebla , and m. hermenegildo .he ciao prolog preprocessor .technical report clip8/95.0.7.20 , technical university of madrid ( upm ) , facultad de informtica , 28660 boadilla del monte , madrid , spain , 1999 .s. munoz - hernandez , c. vaucheret , and s. guadarrama .combining crisp and fuzzy logic in a prolog compiler . in j. j. moreno - navarro and j. mario , editors , _ joint conf . on declarative programming : appia - gulp - prode 2002 _ , pages 2338 , madrid , spain , september 2002 . c. vaucheret , s. guadarrama , and s. munoz - hernandez .fuzzy prolog : a simple general implementation using clp(r ) . in m.baaz and a. voronkov , editors , _ logic for programming , artificial intelligence , and reasoning , lpar 2002 _ , number 2514 in lnai , pages 450463 , tbilisi , georgia , october 2002 .springer - verlag . c. vaucheret , s. guadarrama , and s. munoz - hernandez .fuzzy prolog : a simple general implementation using clp(r ) . in p.j .stuckey , editor , _ int .conf . in logic programming ,iclp 2002 _ , number 2401 in lncs , page 469 , copenhagen , denmark , july / august 2002 .springer - verlag .
|
fuzzy reasoning is a very productive research field that during the last years has provided a number of theoretical approaches and practical implementation prototypes . nevertheless , the classical implementations , like fril , are not adapted to the latest formal approaches , like multi - adjoint logic semantics . some promising implementations , like fuzzy prolog , are so general that the regular user / programmer does not feel comfortable because either representation of fuzzy concepts is complex or the results difficult to interpret . in this paper we present a modern framework , _ rfuzzy _ , that is modelling multi - adjoint logic . it provides some extensions as default values ( to represent missing information , even partial default values ) and typed variables . rfuzzy represents the truth value of predicates through facts , rules and functions . rfuzzy answers queries with direct results ( instead of constraints ) and it is easy to use for any person that wants to represent a problem using fuzzy reasoning in a simple way ( by using the classical representation with real numbers ) . reasoning , implementation tool , fuzzy logic , multi - adjoint logic , logic programming application
|
quantum channel discrimination is a natural extension of a basic problem in quantum hypothesis testing , that of distinguishing between the possible states of a quantum system . in the case of binary state discrimination, it is given _ a priori _ that a quantum system is in one of two states or , and the goal is to identify in which state it is by performing a quantum measurement .we say that is the null hypothesis and is the alternative hypothesis .a natural extension of this problem occurs in the independent and identically distributed ( i.i.d . ) setting . here, the discriminator is provided with quantum systems in the state or , and the task is to apply a binary measurement on these systems , with , to determine which state he possesses .one is then concerned with two kinds of error probabilities: the probability of incorrectly rejecting the null hypothesis , the type i error , and the probability of incorrectly rejecting the alternative hypothesis , the type ii error . of course , it is generally impossible to find a quantum measurement such that both of these errors are equal to zero simultaneously , so one instead studies the asymptotic behaviour of and as , expecting there to be a trade - off between minimising and minimising .in asymmetric hypothesis testing , one fixes a constraint on the type i error , say , and then seeks to minimise the type ii error .when a constant threshold is imposed on the type i error , the optimal type ii error is given by the central result in the asymptotic setting is the quantum stein s lemma , due to hiai and petz and ogawa and nagaoka . the direct part of the lemma states that for any constant bound on the type i error , there exists a sequence of measurements that meets this constraint and is such that the type ii error decreases to zero exponentially fast with a decay exponent given by the quantum relative entropy , defined as {cc}\text{tr}\left\ { \rho\left [ \log\rho-\log\sigma\right ] \right\ } & \text{if supp}\left ( \rho\right ) \subseteq\text{supp}\left ( \sigma\right ) \\+ \infty & \text{otherwise}\end{array } \right . .\label{eq : vn - rel - ent}\ ] ] in the above and throughout the paper , we take the logarithm to be base two .furthermore , the strong converse part of the lemma states that any attempt to make the type ii error decay to zero with a decay exponent larger than the relative entropy will result in the type i error converging to one in the large limit .the direct and the strong converse parts can be succinctly written as that is , for any threshold value , the optimal type ii error decays exponentially fast in the number of copies , and the decay rate is equal to the relative entropy .it is easy to see that the negative logarithm of the optimal type ii error , is non - negative and monotonic non - increasing under completely positive trace - preserving maps .thus , it can be considered as a generalized divergence or generalized relative entropy and it was named hypothesis testing relative entropy in . with this notation , stein s lemma can be reformulated as as a refinement of the quantum stein s lemma , one can study the optimal type i error given that the type ii error decays with a given exponential speed .one is then interested in the asymptotics of the optimal type i error with a constant . 
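as a small numerical aside before discussing these refinements further , the relative entropy governing the stein exponent can be computed directly from the eigendecompositions of the two states . the sketch below uses an arbitrary pair of qubit states and base - two logarithms , as in the rest of the paper ; the eigenvalue clipping is only a crude numerical regularization of the support condition and is not part of the definition .
....
import numpy as np

def log2m(a):
    # matrix base-2 logarithm via the eigendecomposition
    w, v = np.linalg.eigh(a)
    return v @ np.diag(np.log2(np.clip(w, 1e-12, None))) @ v.conj().T

def relative_entropy(rho, sigma):
    # D(rho||sigma) = Tr[rho (log2 rho - log2 sigma)], finite when supp(rho) is in supp(sigma)
    return float(np.real(np.trace(rho @ (log2m(rho) - log2m(sigma)))))

rho   = np.array([[0.7, 0.2], [0.2, 0.3]])   # an arbitrary qubit state
sigma = np.diag([0.5, 0.5])                  # the maximally mixed state
print(relative_entropy(rho, sigma))          # equals 1 - S(rho), roughly 0.25 here
....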
in the direct domain ,when , also decays with an exponential speed , as was shown in .the exact decay rate is determined by the quantum hoeffding bound theorem as where is a quantum rnyi relative entropy , to be defined later , and is the hoeffding divergence of and . on the other hand , in the strong converse domain , when , goes to exponentially fast .the rate of this convergence has been determined in in terms of the limit of post - measurement rnyi relative entropies .a `` single - letter '' expression has been obtained recently in as where is an alternative version of the quantum rnyi relative entropy , and is the hoeffding anti - divergence .note that it is unique to the quantum case that one requires a rnyi relative entropy for the strong converse domain which is different from that used in the direct domain ( however , these rnyi relative entropies coincide when and commute , i.e. , the classical case ) . the results in and give a complete understanding of the trade - off between the two error probabilities in the asymptotics .note that the quantum stein s lemma can also be recovered from and in the limit .we remark that there are other ways of refining our understanding of the quantum stein s lemma , as established recently in .the objectives of channel discrimination are very similar to those of state discrimination ; what makes the problem different is the complexity of the available discrimination strategies . in the general setupwe have a quantum channel with input system and output system , and we know that the channel is described by either or , where and are completely positive trace - preserving ( cptp ) maps .we assume that we can use the channel several times , consecutive uses are independent , and the properties of the channel do not change with time .thus , uses of the channel are described by either or . a non - adaptive discrimination strategy for uses of the channelconsists of feeding an input state into the -fold tensor - product channel , and then performing a binary measurement on the output , which is either or . here, is an ancilla system on which the channel acts trivially as the identity map .when an adaptive strategy is used , the output of the first uses of the channel can be used to prepare the input for the -th use ; see figure [ figurechanneldiscrim1 ] for a pictorial explanation and section [ sec : channel stein ] for a precise definition . for any discrimination strategy ,let and denote the output of the -fold product channel depending on whether the channel is equal to or . in analogy with - , one can define the type i and the type ii errors as where is the measurement part of the strategy .it is then natural to consider the optimal error probabilities where denotes the set of allowed discrimination strategies and the optimisations are over all strategies in the class . here, we will consider for adaptive and for product strategies .the latter are all non - adaptive strategies with an input state , where is an arbitrary state on and some ancilla . 
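the rényi quantities controlling the two domains can likewise be evaluated numerically . the following sketch computes the petz rényi relative entropy , ( 1/(alpha-1 ) ) log2 tr[ rho^alpha sigma^(1-alpha ) ] , and the sandwiched version , ( 1/(alpha-1 ) ) log2 tr[ ( sigma^((1-alpha)/2alpha ) rho sigma^((1-alpha)/2alpha ) )^alpha ] , for a pair of full - rank qubit states , and then approximates the anti - divergence by a grid scan over alpha > 1 ; the states , the grid and the rate are arbitrary illustrative choices , and the grid scan is only a rough stand - in for the actual supremum .
....
import numpy as np

def mpow(a, p):
    w, v = np.linalg.eigh(a)
    return v @ np.diag(np.clip(w, 0.0, None) ** p) @ v.conj().T

def petz_renyi(rho, sigma, alpha):
    return float(np.log2(np.real(np.trace(mpow(rho, alpha) @ mpow(sigma, 1 - alpha)))) / (alpha - 1))

def sandwiched_renyi(rho, sigma, alpha):
    s = mpow(sigma, (1 - alpha) / (2 * alpha))
    return float(np.log2(np.real(np.trace(mpow(s @ rho @ s, alpha)))) / (alpha - 1))

def anti_divergence(rho, sigma, r, alphas=np.linspace(1.001, 20.0, 2000)):
    # grid approximation of sup_{alpha>1} (alpha-1)/alpha * [r - D~_alpha(rho||sigma)]
    return max((a - 1) / a * (r - sandwiched_renyi(rho, sigma, a)) for a in alphas)

rho   = np.array([[0.7, 0.2], [0.2, 0.3]])
sigma = np.diag([0.6, 0.4])
for alpha in (0.5, 1.5, 2.0):
    print(alpha, petz_renyi(rho, sigma, alpha), sandwiched_renyi(rho, sigma, alpha))
print(anti_divergence(rho, sigma, r=1.0))
....
for commuting states the two rényi quantities coincide , as remarked above .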
obviously , if only product strategies are allowed ( ) , then the optimal rates of these error probabilities are given by the corresponding channel divergences as according to the previously explained results on state discrimination .note that in an infimum is taken ; the reason is that , in the strong converse domain , the goal is to minimise the exponent of the success probability .the hoeffding ( anti-)divergences can also be expressed as where and are the channel rnyi relative entropies : the optimizations in and are taken over all possible bipartite states with an arbitrary ancilla system . for adaptive strategies ,the relations - are not expected to hold for arbitrary channels .for instance , the results of provide some evidence in this direction .( see as well for related results and more general conclusions . )there are various classes of channels , however , for which - hold ; for these channels , adaptive strategies do not offer any benefit over product strategies .for instance , hayashi showed - with for any pair of classical channels .another extreme case is when both and are replacer channels , i.e. , there exist states such that and .obviously , in this case all the channel divergences are equal to the corresponding divergences of the two states ; e.g. , , etc .it is also heuristically clear that adaptive strategies do not offer any benefit over product strategies , and the channel discrimination problem reduces to the state discrimination problem between and , described before .two of our main results , theorems [ stein ] and [ scrtheorem ] yield as a special case a mathematically precise argument for these heuristics in the case of and .a natural intermediate step towards determining the error exponents of the general quantum channel discrimination problem is to allow one of the channels to be arbitrary , while keeping the other channel a replacer channel .this setup interpolates between the fully understood case of state discrimination and the still open problem of general quantum channel discrimination .here we consider the setup in which the first channel is arbitrary and the second channel is a replacer channel .we prove ( stein s lemma ) in section [ sec : stein proof ] , and show in section [ sec : sc proof ] that the strong converse exponent is given as in for adaptive strategies ( ) . as for now , we leave the optimality part of open for . as a consequence of these results , in section [ sec : ea sc proof ]we can establish a strong converse theorem for the quantum - feedback - assisted capacity of a channel , which is the capacity of a quantum channel for transmitting classical information with the assistance of a noiseless quantum feedback from receiver to sender .our result here strengthens that of bowen s .we also make a connection between our results and quantum illumination in section [ sec : quantum - illum ] . 
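as a concrete illustration of the role of the ancillary ( reference ) system in these optimizations , the following sketch evaluates the relative entropy between the channel output and the replacer output for two particular input states of a qubit amplitude - damping channel ; the channel , its damping parameter and the replaced state are illustrative assumptions , not taken from the paper .
....
import numpy as np

def log2m(a):
    w, v = np.linalg.eigh(a)
    return v @ np.diag(np.log2(np.clip(w, 1e-12, None))) @ v.conj().T

def rel_ent(rho, sigma):
    return float(np.real(np.trace(rho @ (log2m(rho) - log2m(sigma)))))

def apply_channel_on_second(rho_ra, kraus):
    # (id_R tensor N_{A->B}) acting on a two-qubit state; R is the first factor
    out = np.zeros_like(rho_ra)
    for k in kraus:
        kk = np.kron(np.eye(2), k)
        out = out + kk @ rho_ra @ kk.conj().T
    return out

def partial_trace_second(rho):
    # trace out the second qubit
    return np.einsum('ijkj->ik', rho.reshape(2, 2, 2, 2))

eta = 0.3                                           # illustrative damping parameter
kraus = [np.array([[1.0, 0.0], [0.0, np.sqrt(1 - eta)]]),
         np.array([[0.0, np.sqrt(eta)], [0.0, 0.0]])]
sigma = np.diag([0.5, 0.5])                         # state output by the replacer channel

inputs = {"maximally entangled": np.array([1, 0, 0, 1]) / np.sqrt(2),
          "product |0>|0>":      np.array([1.0, 0, 0, 0])}
for name, psi in inputs.items():
    rho_ra = np.outer(psi, psi.conj())
    rho_rb = apply_channel_on_second(rho_ra, kraus)
    psi_r = partial_trace_second(rho_rb)            # equals the reduced input state on R
    print(name, rel_ent(rho_rb, np.kron(psi_r, sigma)))
....
in this small example the entangled input gives a strictly larger value than the product input , which is why the channel divergences above are defined with an optimization over bipartite input states rather than over inputs to the channel alone .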
finally , in section [ sec : extension ] , we discuss how to combine the recent results in with ours to obtain a quantum stein s lemma in a setting more general than that considered in either paper .this gives a novel operational interpretation of the mutual information of a quantum channel , different from that already found in entanglement- and quantum - feedback - assisted communication .we also discuss an open question regarding the characterization of the strong converse exponent in this more general setting .our first result is a generalization of the quantum stein s lemma in ( [ eq : steins - htre ] ) to the setting of adaptive quantum channel discrimination . in particular, we study the difficulty of discriminating between an arbitrary quantum channel and a replacer channel that discards its input and replaces it with a fixed state .an important physical realization of this problem is in quantum illumination ( discussed more in section [ sec : quantum - illum ] ) .we show that a tensor - power strategy is optimal in this case , so that there is no need to consider the most general adaptive strategy ( at least in the asymptotic regime ) .this can be seen as a quantum stein s lemma for this task ; if one optimises the type ii error under the constraint that the type i error is less than some fixed constant , then the optimal type ii error probability can not decrease to zero exponentially faster than a rate determined by the relative entropy .otherwise , the type i error necessarily converges to one .it is straightforward to employ the direct part of the established quantum stein s lemma from in order to establish the direct part for our setting ..,width=576 ] .,width=576 ] in more detail , the most general adaptive discrimination strategy is depicted in figures [ figurechanneldiscrim1 ] and [ figurechanneldiscrim2 ] .it consists of a choice of input state , a sequence of adaptive quantum channels , and finally a quantum measurement to decide which channel was applied. let denote the output state at the end of the adaptive discrimination strategy ( before the final measurement is performed ) when the channel being applied is , and let denote the output state at the end of the adaptive discrimination strategy when the channel being applied is .let denote the adaptive hypothesis testing relative entropy , which generalizes by allowing for an optimization over all possible adaptive strategies used to discriminate between and .we define it formally as follows: where the infimum is over all measurement operators subject to and all preparation states subject to and tr , and all adaptive quantum channels we can now state our first main result : [ stein ] let be a fixed constant .let be an arbitrary quantum channel and let be the replacer quantum channel , for some fixed density operator . then the channel version of stein s lemma , holds , i.e. 
, for any .it suffices to take system isomorphic to system in the above optimization .this theorem clearly generalizes the quantum stein s lemma in ( [ eq : steins - htre ] ) .it implies that a tensor - power discrimination strategy is optimal allowing for an adaptive strategy yields no asymptotic improvement .that is , one should simply prepare copies of the bipartite state optimizing ( [ eq : adaptive - steins ] ) , send each system through each channel use ( creating the state ^{\otimes n} ] ) , and finally perform a collective measurement on all systems to decide which channel was applied .next , we refine our analysis by identifying the strong converse exponent for the task of discriminating between an arbitrary quantum channel and a replacer channel .it is easy to see ( by considering ) that theorem [ stein ] implies that for any rate , there exists a sequence of non - adaptive strategies , along which the type i error goes to zero , and the type ii error vanishes exponentially fast , with a rate at least .this is usually referred to as the direct part of stein s lemma .moreover , it also implies that the strong converse property holds , i.e. , for any sequence of adaptive srategies , if the type ii error vanishes exponentially with a rate , then the type i error goes to ( this can be seen by taking ) .our aim is to determine the speed of convergence of the type i error to in the strong converse domain , for any decay rate of the type ii errors . as it turns out , this convergence is also exponential , and hence our aim is to determine the exact values of the strong converse exponents : where the infimum is over all sequences of adaptive measurement strategies , specified by measurement operators subject to , preparation states , and adaptive quantum channels }\equiv\left\ { \mathcal{a}_{r_{i}b_{i}\rightarrow r_{i+1}a_{i+1}}^{\left ( i\right ) } \right\ } _ { i\in\left\ { 1,\ldots , n-1\right\ } } .\ ] ] we establish the following theorem : [ scrtheorem ] let be an arbitrary quantum channel , and let be the replacer quantum channel , for some fixed density operator .for any , \label{scrtheroem1}\\ & = \inf_{\psi_{ra}}\sup_{\alpha>1}\frac{\alpha-1}{\alpha}\left [ r-\widetilde{d}_{\alpha } ( { \mathcal{n}}_{a\rightarrow b}(\psi _ { ra})\vert\psi_{r}\otimes\sigma_{b } ) \right ] \label{scrtheroem2}\\ & = \sup_{\alpha>1}\frac{\alpha-1}{\alpha}\left [ r-\widetilde{d}_{\alpha } ( \mathcal{n}\vert\mathcal{r } ) \right ] , \label{scrtheroem3}\end{aligned}\ ] ] where is defined in , the infima are taken over all possible bipartite states with an arbitrary ancilla system ; in particular , holds .moreover , the same identities hold when the infima are restricted to pure states with being a fixed copy of .when , the above statement is empty . on the other hand ,when is finite , then theorem 2 also holds for in a trivial way .indeed , by theorem [ stein ] , if then the operational quantities in are equal to , and so is , since for every , according to lemma [ mi limits ] .our results have implications for the theory of quantum illumination , which we discuss briefly here .building on prior work in , lloyd _ et al_. 
show how the use of entangled photons can provide a significant improvement over unentangled light when detecting the presence of an object .the goal in quantum illumination is to determine whether a distant object is present or not by employing quantum light along with a quantum detection strategy .it is sensible and traditional to take the object not being present as the null hypothesis and the object being present as the alternative hypothesis . in the usual scenario ,the transmitter and receiver are in the same location .the protocol begins with the transmitter sending a signal mode that is entangled with an idler mode still in the possession of the transmitter .let denote the state of the signal and idler mode .if the object is not present ( the null hypothesis ) , then the signal mode is lost and is replaced by a thermal state , so that the joint state becomes . clearly , this is an instance of the replacer channel .if the object is present ( the alternative hypothesis ) , then the signal beam is reflected off the object and returns to the transmitter .the resulting state is described by , where describes the noise characteristics of the reflection channel .this protocol is performed times with the receiver storing either the state ^{\otimes n} ] .the receiver finally performs a collective measurement on all of the systems in order to decide whether the object is present .thus , we have a quantum channel discrimination problem in which one seeks to distinguish between a replacer channel and a noisy channel .however , our results do not apply to this setting if one takes the null and alternative hypotheses in the natural way suggested above .an alternative scenario is that in which the transmitter and receiver are in different locations .it is technologically more challenging to take advantage of quantum illumination in this setting , due to the fact that the transmitter and receiver need to share and store entanglement over a potentially large distance .nevertheless , this is the setting to which our results apply . given that the null hypothesis in this setting corresponds to the object not being present , the channel applied to the transmitted mode will be , which characterizes the optical loss in the transmission .since the alternative hypothesis in this setting corresponds to the object being present and such an object will reflect the light incident on it , the signal beam does not make it to the receiving end and the receiver instead detects thermal noise , so that the channel applied to the transmitted mode is the replacer channel .thus , the type i and type ii errors for this setting correspond to our setting described in the previous sections .implicit in prior analyses on quantum illumination is the assumption that a tensor - power , non - adaptive strategy is optimal .our results support this assumption ( at least in the particular setting of asymmetric hypothesis testing described above ) by showing that no asymptotic advantage is provided by instead using an adaptive strategy for quantum channel discrimination .it remains an open question to determine if a tensor - power , non - adaptive strategy is optimal in the symmetric hypothesis testing setting considered in .there is a well - known connection between hypothesis testing and channel coding , first recognized by blahut , and this connection also holds for quantum channels .the direct part of the channel coding theorem ( i.e. 
, the holevo - schumacher - westmoreland theorem ) can be obtained from the direct part of stein s lemma , as shown in .one consequence of theorem [ stein ] is a strong converse theorem for the quantum - feedback - assisted classical capacity of a quantum channel . in prior work, bowen proved that a noiseless quantum feedback channel does not increase the entanglement - assisted capacity of a noisy channel , by proving a weak converse for its quantum - feedback - assisted capacity .that is , bowen proved that the quantum - feedback - assisted capacity of a channel is equal to its entanglement - assisted capacity , denoted by however , bowen s result did not exclude the possibility of a trade - off between the communication rate and the error probability ; our strong converse theorem shows that no such trade - off is possible in the asymptotic limit of many channel uses .a strong converse theorem in this context states that for any coding scheme , which seeks to transmit at a rate strictly higher than the capacity of the channel , the probability of successful decoding decays to zero exponentially fast in the number of channel uses .so our result sharpens bowen s , strengthens the main result of , and generalizes ( * ? ? ? * theorem 7 ) to the quantum case .the approach taken is inspired by that used by nagaoka , who derived the strong converse theorem for any memoryless quantum channel from the monotonicity of the rnyi relative entropies .polyanskiy and verd later generalised this approach to show how a bound on the success probability could be derived from any relative - entropy - like quantity that satisfies certain natural properties .this approach has already been used to prove several strong converse theorems for quantum channels ; here we shall use the sandwiched rnyi relative entropy . with the proof of theorem [ stein ] in hand, it requires only a little extra effort to prove a strong converse for the quantum - feedback - assisted capacity of a quantum channel ( the capacity when unlimited use of a noiseless quantum feedback channel from receiver to sender is allowed ) .[ thm : sc - feedback]let denote the success probability of any rate quantum - feedback - assisted communication code for a channel that uses it times .the following bound holds where is the sandwiched rnyi mutual information of the channel , defined in . as a consequence of this bound, we can conclude a strong converse : for any sequence of quantum - feedback - assisted codes for a channel with rate , the success probability decays exponentially to zero as .note that the second statement in theorem [ thm : sc - feedback ] has in fact already been proved in ( * ? ? ?* section iv - e1 ) , via the channel simulation technique . however , our new contribution here is to provide the bound in ( [ eq : qfa - strong - conv - exponent ] ) on the strong converse exponent , in addition to providing an arguably more direct proof of the theorem .it remains an open question to determine if the strong converse exponent bound in ( [ eq : qfa - strong - conv - exponent ] ) is optimal ( i.e. , if there exists a quantum - feedback - assisted communication scheme achieving this exponent in the strong converse regime ) .for two hilbert spaces , let denote the set of bounded linear operators from to .when , we use the shorthand notation . 
we restrict ourselves to finite - dimensional hilbert spaces throughout this paper .the schatten -norm of an operator is defined as for .let denote the subset of positive semi - definite operators ; we often simply say that an operator is positive if it is positive semi - definite . we also write if .an operator is in the set of density operators if and .we denote by and the set of positive definite operators and states on , respectively .the tensor product of two hilbert spaces and is denoted by . given a bipartite density operator , we write for the reduced density operator on system .a linear map is positive if whenever .let id denote the identity map acting on a system .a linear map is completely positive if the map id is positive for a reference system of arbitrary size . a linear map is trace - preserving if for all input operators .if a linear map is completely positive and trace - preserving ( cptp ) , we say that it is a quantum channel or quantum operation . a positive operator - valued measure ( povm )is a set of positive operators such that .the quantum rnyi relative entropy of order between two non - zero positive semidefinite operators and is given by {cc}\frac{1}{\alpha-1}\log \frac{1}{\operatorname{tr}\rho}\text{tr}\left\ { \rho^{\alpha}\sigma^{1-\alpha}\right\ } & \text{if } \rho\not \perp \sigma\text { and ( supp}\left ( \rho\right ) \subseteq\text{supp}\left ( \sigma\right ) \text { or } \alpha\in\lbrack 0,1)\text{\ ) } \\+ \infty & \text{otherwise}. \end{array } \right . ,\label{eq : renyi - rel - ent}\ ] ] with the support conditions established in . here andhenceforth we use the convention that powers of a positive semidefinite operator are taken only on its support , i.e. , if are the strictly positive eigenvalues of with corresponding spectral projections , then for every .in particular , denotes the projection onto the support of .recently , the sandwiched rnyi relative entropy was introduced .it is defined for as follows:{cc}\frac{1}{\alpha-1}\log\left [ \frac{1}{\operatorname{tr}\rho } \text{tr}\left\ { \left ( \sigma^{\left ( 1-\alpha\right ) /2\alpha}\rho\sigma^{\left ( 1-\alpha\right ) /2\alpha } \right ) ^{\alpha}\right\ } \right ] & \begin{array } [ c]{c}\text{if } \rho\not \perp \sigma\text { and ( supp}\left ( \rho\right ) \subseteq\text{supp}\left ( \sigma\right ) \\\text{or } \alpha\in(0,1)\text { ) } \end{array } \\+ \infty & \text{otherwise}\end{array } \right . .\label{eq : def - sandwiched}\ ] ] it is known that for any fixed , and in the limit , they both give the relative entropy : the rnyi relative entropies have several desirable properties which justify viewing them as distinguishability measures . in particular , satisfies the following data - processing inequality for : where is a cptp map .a similar inequality holds for when ] . according to , is jointly quasi - convex for , and by , the same holds for and ] .hence , by taking purifications of in and , the values can only increase .thus , the optimizations in and can be restricted to pure states .using lemma [ lemma : channel renyi0 ] with and , the assertions follow . note that for , the above quantities are defined using the relative entropy , and we have , and , where is defined in. 
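as a quick numerical companion to these definitions ( an illustration added here , not code from the paper ) , the following python sketch implements the petz and sandwiched renyi relative entropies for full - rank states , illustrates that both approach the relative entropy as alpha tends to 1 , and evaluates the stein rate for an assumed qubit example ; the depolarizing channel , the maximally entangled input and the maximally mixed replacer state are illustrative choices only .

```python
import numpy as np

def mpow(a, p):
    # fractional power of a positive definite hermitian matrix via its eigendecomposition
    w, v = np.linalg.eigh(a)
    return (v * w**p) @ v.conj().T

def relative_entropy(rho, sigma):
    # D(rho||sigma) = tr[rho (log rho - log sigma)], in bits, for full-rank states
    wr, vr = np.linalg.eigh(rho)
    ws, vs = np.linalg.eigh(sigma)
    log_rho = (vr * np.log2(wr)) @ vr.conj().T
    log_sigma = (vs * np.log2(ws)) @ vs.conj().T
    return float(np.real(np.trace(rho @ (log_rho - log_sigma))))

def petz_renyi(rho, sigma, alpha):
    # (1/(alpha-1)) * log tr[ rho^alpha sigma^(1-alpha) ]
    val = np.real(np.trace(mpow(rho, alpha) @ mpow(sigma, 1 - alpha)))
    return float(np.log2(val) / (alpha - 1))

def sandwiched_renyi(rho, sigma, alpha):
    # (1/(alpha-1)) * log tr[ (sigma^((1-a)/2a) rho sigma^((1-a)/2a))^a ]
    s = mpow(sigma, (1 - alpha) / (2 * alpha))
    val = np.real(np.trace(mpow(s @ rho @ s, alpha)))
    return float(np.log2(val) / (alpha - 1))

def depolarize_B(rho_RB, q):
    # (id_R (x) depolarizing_q)(rho_RB) = (1-q) rho_RB + q rho_R (x) I/2, for two qubits
    rho_R = np.array([[np.trace(rho_RB[0:2, 0:2]), np.trace(rho_RB[0:2, 2:4])],
                      [np.trace(rho_RB[2:4, 0:2]), np.trace(rho_RB[2:4, 2:4])]])
    return (1 - q) * rho_RB + q * np.kron(rho_R, np.eye(2) / 2)

phi = np.zeros((4, 4))
for i in (0, 3):
    for j in (0, 3):
        phi[i, j] = 0.5        # maximally entangled input on R (x) A

out = depolarize_B(phi, 0.3)                   # channel output N(psi_RA)
rep = np.kron(np.eye(2) / 2, np.eye(2) / 2)    # replacer output psi_R (x) sigma_B with sigma_B = I/2

print("relative entropy (stein rate for this input/replacer pair):", relative_entropy(out, rep))
for a in (1.001, 1.5, 2.0):
    print(a, "petz:", petz_renyi(out, rep, a), "sandwiched:", sandwiched_renyi(out, rep, a))
```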
we will need the following extensions of : [ mi limits ] 1 .[ channel div limit ] for any two channels , and are monotone increasing in , and 2 .[ mi limit ] for every bipartite state , and are monotone increasing in , and 3 .[ channel info limit ] for every channel , and are monotone increasing in , and see appendix [ app : cb ] .the channel rnyi mutual informations also have the following geometric interpretation , as the `` distance '' of the channel from the set of all replacer channels , where the `` distance '' is measured by the channel rnyi divergences .see section [ sec : extension ] for the relevance of this geometric picture .[ lemma : geom int ] for every channel , and every , see appendix [ app : cb ] .this section provides a proof of theorem [ stein ] . in the setting of this theorem, we seek to distinguish between an arbitrary quantum channel and a replacer channel that maps all states to a fixed state , i.e. , .we allow the preparation of an arbitrary input state , where is an ancillary register .the use of a channel accepts the register as input and produces the register as output . after each invocation of the channel , an adaptive operation is applied to the registers and , yielding a quantum state or in registers , depending on whether the channel is equal to or is , for every on the left - hand side , and for every on the right - hand side .finally , a quantum measurement is performed on the systems to decide which channel was applied .such a general protocol is depicted in figures [ figurechanneldiscrim1 ] and [ figurechanneldiscrim2 ] .note that since is a replacer channel , we can write recall the hypothesis testing relative entropy from ( [ eq : htre ] ) and the adaptive hypothesis testing relative entropy from .so denotes the hypothesis testing relative entropy in which there is a fixed initial state and fixed adaptive maps .clearly , we have that by employing a tensor - power strategy with no adaptation ( i.e. , we can simply invoke the direct part of the usual quantum stein s lemma ) . in more detail ,the initial state of this strategy is the optimal in ( [ eq : ea - stein - lower - bound ] ) and each map simply prepares the state at the input of the channel while acting as the identity map on the states or ( so the strategy is non - adaptive ) . after the channel has acted , the discriminator performs a binary collective measurement on the state or to decide which channel was applied .so the lower bound in ( [ eq : ea - stein - lower - bound ] ) follows directly from the state discrimination result in ( [ eq : steins - htre ] ) .the more interesting part is to show that this strategy is asymptotically optimal , i.e. , that since this inequality is trivial when , we assume the contrary for the rest .we start by bounding the adaptive hypothesis testing relative entropy in terms of the sandwiched rnyi relative entropy . throughout this section, the parameter is assumed to be strictly larger than one and we fix some constant .we fix some input state and an adaptive strategy .lemma [ lem : sandwich - to - htre ] implies that we now focus on the term . let denote the completely positive map that conjugates by a positive operator . from ( [ entropytonorm ] ) , it follows that let us focus on the expression inside the logarithm: the equality in ( [ changetonorm ] ) follows from the characterisation of the completely bounded -norm given in ( [ cb1toalpha ] ) . 
rewriting this inequality in terms of the sandwiched rnyi relative entropy, we have that where the last inequality follows from monotonicity of the sandwiched rnyi relative entropy under the map . note that we are now left with the quantity which corresponds to applying the first rounds of the adaptive discrimination process .we can thus iterate the above argument through all steps of the adaptive strategy . noting that , and thus , we obtain the bound where follows from lemma [ lemma : cb ] . this bound is independent of any particular adaptive strategy used for discriminating these channels .thus , we can conclude that taking the limsup as , we get the -independent bound taking now the infimum over , the assertion follows due to lemma [ mi limits ] .having just proven a quantum stein s lemma for adaptive channel discrimination , it is then natural to study the trade - off between error probabilities , when we impose the condition that the type ii error probability has exponential decay rate for one expects the type i error to tend to one exponentially quickly .building on the above results , we identify the strong converse exponent for the channel discrimination problem ( where , as before , we assume that the alternative hypothesis is a replacer channel ) .our result generalizes the quantum state discrimination result from ( * ? ? ?* theorem iv.10 ) .the notation is the same as in the previous section ; in particular , and are as in ( [ eq : rho - adaptive ] ) and ( [ eq : tau - adaptive ] ) , respectively .recall the definitions of and from , and the definition of from .we will need the following lemma : [ lemma : channel renyi finite ] let be a quantum channel from system to system , and .the following are equivalent : 1 .[ support1 ] for every , every system , and every , .[ support2 ] for every , .3 . [ support3 ] for all .[ support4 ] for some .[ support1][support2 ] is trivial ( by taking and ) . by lemma [ lemma : channel renyi ] , [ support2][support1 ] because , which follows by iterating the general inclusion ( see , e.g. , ( * ? ? ?* appendix b.4 ) ) and applying [ support2 ] .if [ support2 ] is satisfied then is a continuous finite - valued function on the compact set , and hence its supremum is finite , proving [ support3 ] .the implication [ support3][support4 ] is trivial . finally , [ support4][support2 ] by applying the definition of .[ proof of theorem [ scrtheorem ] ] the statement is empty when , and hence for the rest we assume the contrary .we begin by proving the optimality part .\ ] ] note that if , is a sequence of adaptive strategies such that then for all large enough , , and thus , which yields the first inequality in . to prove the second inequality in , consider the output states , and the test , at the end of the adaptive discrimination strategy . by lemma [ lem :sandwich - to - htre ] and , we get .\ ] ] taking the supremum of both sides of over all strategies such that the type ii error is at most , we obtain , \label{eq : ineq - to - gen}\ ] ] which yields the second inequality in .we now establish the achievability part .\ ] ] the first inequality follows the same way as the first inequality in .let be an arbitrary system . 
according to theorem iv.10 and remark iv.11 in , for every state and every , there exists a sequence of tests , , such that thus , from the definition of the hoeffding anti - divergence , it is clear that is a monotone increasing convex function on .moreover , lemma iv.9 in implies that is finite for every .thus , is continuous on , and yields since this is true for every , we finally get .\label{sc upper}\ ] ] the last step is to show that the rhs of and are equal to each other .first , note that the rhs of can be written as ,\end{aligned}\ ] ] where the infimum is taken over with , due to lemma [ lemma : channel renyi ] .moreover , the rhs of can be trivially upper bounded by \end{aligned}\ ] ] ( see the proof of lemma [ lemma : channel renyi ] in appendix [ app : cb ] ) .next , define for and .introducing the new variable , we have to show that where by lemmas 3 and 4 in , is concave , and hence is convex , for any fixed . on the other hand, is convex by corollary 3.11 in , and lemma [ lemma : convexity ] below yields that is also convex , which in turn implies the concavity of for any fixed . by assumption , , andtaking into account lemma [ lemma : channel renyi finite ] , it is easy to see that is continuous for any . since the state space of is compact , the kneser - fan minimax theorem yields .[ lemma : convexity ] let be a convex function .then is convex as well .since is convex , it can be written as the supremum of affine functions , i.e. , for some , and thus as a supremum of affine functions , is convex .it is not too difficult to see that theorem [ stein ] can be reformulated the following way : [ direct part ] for every , there exists a sequence of adaptive strategies such that the type i error goes to and the type ii error decays exponentially with a rate at least .[ strong converse part ] for every , and any sequence of adaptive strategies such that the type ii error decays exponentially with a rate at least , the type i error goes to . as we have seen, the direct part is an immediate consequence of stein s lemma for state discrimination . 
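to make this dichotomy concrete , here is a hedged classical analogue ( an editorial illustration for commuting hypotheses , not taken from the paper ) : for two probability distributions p and q , a likelihood - ratio test with threshold n*r has type ii error at most exp(-n*r) by a markov bound , and its type i error vanishes or tends to one according to whether r lies below or above the relative entropy of p with respect to q .

```python
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.7, 0.3])        # null hypothesis (plays the role of the channel N)
q = np.array([0.4, 0.6])        # alternative hypothesis (plays the role of the replacer R)
llr = np.log(p / q)             # per-sample log-likelihood ratio
D = float(np.sum(p * llr))      # relative entropy D(p||q), in nats

def type_one_error(r, n, trials=2000):
    # test: accept the null iff the summed log-likelihood ratio is >= n*r;
    # under q, P[sum >= n*r] <= exp(-n*r) by Markov's inequality applied to exp(sum),
    # so the type ii error decays with exponent at least r for every n
    samples = rng.choice(2, size=(trials, n), p=p)
    scores = llr[samples].sum(axis=1)
    return float(np.mean(scores < n * r))   # probability of rejecting the true null

n = 400
for r in (0.5 * D, 1.5 * D):
    print(f"r = {r:.3f}, D(p||q) = {D:.3f}, empirical type i error ~ {type_one_error(r, n):.3f}")
```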
for the proof of the strong converse part and for the proof of the optimality part of theorem [ scrtheorem ] , we followed the same argument of first using the monotonicity of the rnyi relative entropies under measurements and then applying .in fact , one could first prove the optimality part of theorem [ scrtheorem ] and obtain the optimality part of theorem [ stein ] from it in the limit .indeed , lemma [ mi limits ] implies that for any , there exists an such that , and hence the rhs of is strictly positive , from which the strong converse part of the channel stein s lemma is immediate .hayashi and tomamichel recently published their independently obtained results about a hypothesis testing scenario somewhat similar to ours .both our paper and theirs generalise the task of binary state discrimination but in different and not directly comparable directions .they consider the problem of composite hypothesis testing , where the null hypothesis is the presence of a fixed bipartite state and the alternative hypothesis is the presence of a product state that shares one marginal with the null hypothesis .considered as a channel discrimination problem , the null hypothesis is that the i.i.d .channel is applied to the systems of the input , where the input state is restricted to be a fixed tensor - power state of the form .the alternative hypothesis is that a general `` worst - case '' replacer channel is applied to the systems , which leads to an output , where could be any state on the systems . not onlydo they allow for this more general alternative hypothesis , but they also determine both the direct and the strong converse exponents in their scenario . on the other hand , one has to note that when the above result is considered as a channel discrimination problem , allowing only the tensor powers of one fixed state as an input is extremely restrictive .in contrast , our results do allow for more general input states and for the adaptive strategies that distinguish the problem of quantum channel discrimination from binary state discrimination . while the results of the two papers go in quite different directions , there is also a natural combination of them , which enables us to obtain a stein s lemma with strong converse for the following channel discrimination problem with composite alternative hypothesis .for every , the null hypothesis is that the channel is , where is a fixed channel , and the alternative hypothesis is that the channel belongs to the set , where for stein s lemma , one is interested in the asymptotics of the optimal type ii error where the infimum is over all strategies in the class with type i error below .combining theorem 11 in and theorem [ stein ] in this paper , we obtain the following : [ thm : stein comp ] in the above setting , for every , where is the channel relative entropy , and is the channel mutual information . just as in ( see below ), we have where runs over all such that , and the second inequality holds for every . by ( * ? ? ?* theorem 11 ) , for any and any rate , there exists a sequence of binary measurements , for which .\label{hbound}\end{aligned}\ ] ] by lemma [ mi limits ] , the rhs of is strictly negative for every , and hence when combined with , this yields suppose now that for some . 
for every , .hence the assumption yields that , and by theorem [ stein ] this is only possible if .since this is true for every , we finally get that , completing the proof of .the equality of and is due to lemma [ lemma : geom int ] .it is now natural to ask whether the exact strong converse exponent can be determined for this problem , analogously to theorem [ scrtheorem ] .below we give lower and upper bounds for the strong converse exponent .we conjecture that these bounds in fact coincide , and thus give the exact strong converse exponent ; indeed , this could be proved if one could justify interchanging the order of infima and suprema in and below .the problem can be formulated as follows .for any adaptive discrimination strategy , and any , let and be the type i and type ii error probabilities for discriminating between and , as given in .we consider the optimal type i error where denotes the set of allowed discrimination strategies and the optimisation is over all strategies in the class .as before , we take and , for product and adaptive strategies , respectively .we have where runs over all such that , and the second inequality holds for every .applying now the results of , we get that \label{composite0}\\ & = \inf_{\psi_{ra}}\sup_{\alpha>1}\sup_{\sigma\in\s(\hil_b)}\frac{\alpha-1}{\alpha}\left[r-\wt d_{\alpha}\!\bz\n_{a\to b}(\psi_{ra})\|\psi_r\otimes\sigma \jz\right],\label{composite1}\end{aligned}\ ] ] where is due to ( * ? ? ?* theorem 13 ) . on the other hand , and hence \label{composite6}\\ & = \sup_{\sigma\in\s(\hil_b ) }\inf_{\psi_{ra}}\sup_{\alpha>1}\frac{\alpha-1}{\alpha}\left[r-\wt d_{\alpha}\!\bz\n_{a\to b}(\psi_{ra})\|\psi_r\otimes\sigma \jz\right]\label{composite2}\end{aligned}\ ] ] where the two equalities are due to theorem [ scrtheorem ] .if one had joint concavity in the variables and , then one could interchange the optima and show that and are equal to each other , obtaining strong converse exponents for this channel discrimination problem .however , it remains unclear to us if the joint concavity holds or more generally if the exchange is possible .theorem [ thm : stein comp ] gives an operational interpretation to the channel mutual information , and its geometric representation given in lemma [ lemma : geom int ] .if and could be shown to be equal , that would give an analogous operational interpretation to the channel rnyi mutual informations and their geometric representation in lemma [ lemma : geom int ] , for every .in this section , we give a detailed proof of theorem [ thm : sc - feedback ] , which identifies a strong converse exponent for quantum - feedback - assisted communication and states that a strong converse theorem holds for the quantum - feedback - assisted classical capacity of a quantum channel . in an -round feedback - assisted protocol , alice andbob initially share an entangled state on alice s system and bob s system .if alice wants to transmit message , where is the number of messages , she applies a quantum channel with output system to her part of the entangled state ; the result is a state on systems , where is sent over the channel to bob , with an output in system , while is kept at alice s side for possible later use .after this , bob may apply a quantum channel on with an output on , of which system contains the feedback information , that is sent back to alice , while is kept at bob s side for possible later use .this procedure is repeated times , as depicted in figure [ feedbacktest ] ( with ) . 
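for orientation , the round structure just described can be summarized by the following purely structural python sketch ( an editorial addition ; the register names and the placeholder channels are assumptions , and no quantum states are actually simulated ) :

```python
def run_feedback_protocol(n, message, encoders, channel, decoders, measure, shared_state):
    # shared_state = (A'_1, B'_1): the entangled resource held by alice and bob before round 1
    alice, bob = shared_state
    feedback = None                          # no feedback is available in the first round
    for i in range(n):
        # alice encodes the same fixed message into what she currently holds,
        # producing the next channel input A_i and her updated memory A'_{i+1}
        alice, a_i = encoders[i](message, alice, feedback)
        b_i = channel(a_i)                   # A_i is sent through N (or through the replacer R)
        # bob processes (B'_i, B_i), keeps B'_{i+1} and returns the feedback register F_i
        bob, feedback = decoders[i](bob, b_i)
    return measure(bob)                      # the final POVM acts on bob's systems only

# trivial placeholder instances, only to make the data flow executable:
n = 3
encoders = [lambda m, a, f: (a, ("A", m))] * n
decoders = [lambda b, x: (b + [x], ("F", len(b)))] * n
print(run_feedback_protocol(n, message=1, encoders=encoders, channel=lambda x: x,
                            decoders=decoders, measure=len, shared_state=(None, [])))
```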
at each round ,an encoding channel corresponding to the same fixed message is applied , but the may be different channels for different s . at the end of the protocol ,the channel is a povm on with outcomes in , specified by the povm elements . in the last round, can be taken one - dimensional , since whatever information may be stored there does not influence the outcome of the final measurement on bob s systems .we assume for simplicity that the feedback channel is noiseless , although it is not necessary to do so ; indeed , we are looking for an upper bound on the success probability , and noisy feedback can only decrease the success probability . for every stage of the communication process ,let with the appropriate labels denote the state obtained from by the action of all channels up to that stage ; e.g. , , etc .similarly , let with the appropriate labels denote the state obtained from up to a certain stage of the process , where all uses of are replaced by for some fixed state ; see figure [ feedbacktestreplacer ] . moreover , we introduce an auxiliary system with an orthonormal basis , and define where is any set of indices that can occur at a certain stage of the communication process , and such that .for every , we define as . if the outcome of the final measurement is then bob concludes that the message was sent .the success probability of the protocol is given by note that for every round , is independent of , and we have this is because all information about the identity of the message is kept at alice s side all through the protocol , as one can easily see in figure [ feedbacktestreplacer ] .hence , now we can apply nagaoka s method , and use the monotonicity of to get \\ & = \frac{\alpha}{\alpha-1}\log \ps(\p_n)+\log m_n.\end{aligned}\ ] ] we will use the same iterative method as in section [ sec : stein proof ] to complete the proof of theorem [ thm : sc - feedback ] . for every , now , if then by definition , and the last term above is zero .otherwise we can upper bound the last term above as where the inequality is due to the monotonicity of under . using the above steps iteratively ,we finally get where the last identity is due to , and the supremum is over all pure states on , where is a copy of . since this is true for every , we get hence , if is a feedback - assisted coding scheme such that then where we used that holds for every .this proves the bound of theorem [ thm : sc - feedback ] . by this bound , the success probability goes to zero exponentially fast for any rate .
by the monotonicity of the rnyi relative entropies in , , andthe latter is equal to , due to lemma [ mi limits ] .this proves the last assertion of theorem [ thm : sc - feedback ] .this paper establishes a quantum stein s lemma and identifies the strong converse exponent when discriminating an arbitrary channel from the replacer channel .the conclusion is that a tensor - power , non - adaptive strategy is optimal in this regime .this result has implications in the physical setting of quantum illumination , as described in section [ sec : quantum - illum ] .we have also proven that a strong converse theorem holds in the setting of quantum - feedback - assisted communication , strengthening a weak converse result due to bowen .this strong converse theorem also strengthens the main result of , in which a bound on the strong converse exponent was established for the entanglement - assisted communication setting .we show here that this same bound holds in the more general quantum - feedback - assisted communication setting .we also briefly discussed how to combine our results in adaptive channel discrimination with those of hayashi and tomamichel from to obtain a quantum stein s lemma in a more general setting than that considered in either paper .it remains an open question to determine the strong converse exponent for this more general setting .there are several other open questions to consider going forward from here .first , is the strong converse exponent bound in ( [ eq : qfa - strong - conv - exponent ] ) optimal ? that is ,does there exist an entanglement - assisted communication protocol that achieves this bound ?encouraging for us here is that a full solution is known for the classical version of this problem .next , can we say anything about the direct domain for either the adaptive channel discrimination setting or the quantum - feedback - assisted communication setting ? any results obtained in the latter setting would be a counterpart to the error exponents found in for classical communication .finally , would the conclusions of this paper extend to the setting of symmetric hypothesis testing ?that is , would it be possible to show that non - adaptive strategies suffice here ? _ acknowledgements_we thank nilanjanna datta , manish k. gupta , bhaskar roy bardhan , and marco tomamichel for insightful discussions on these topics .we thank david ding for feedback on the manuscript .tc and mmw acknowledge support from the department of physics and astronomy at lsu , from the nsf under award no .ccf-1350397 , and from the darpa quiness program through us army research office award no .w31p4q-12 - 1 - 0019 .mm acknowledges support from the european research council advanced grant irquat , the spanish mineco ( project no .fis2013 - 40627-p ) , the generalitat de catalunya cirit ( project no .2014 sgr 966 ) , the hungarian research grant otka - nkfi k104206 , and the technische universitt mnchen institute for advanced study , funded by the german excellence initiative and the european union seventh framework programme under grant agreement no .* proof of lemma [ lemma : channel renyi0 ] . *we only prove the assertion for , as the proof for goes the same way . 
for every system , defines an isomorphism between and , under which corresponds to .given a pure state , it can be written as , with , where , and .thus , for any channel , let be a partial isometry such that .then it is easy to see that where .the equality of the last two expressions follow from the fact that the channels only act on the system .this completes the proof .* proof of lemma [ lemma : cb ] .* let denote the conjugation by . with the notations in the proof of lemma [ lemma : channel renyi0 ], every pure state can be written as , .then , and hence ^{\alpha}\right\}\\ & { \mbox { } \mbox { } } = \frac{1}{\alpha-1}\log\operatorname{tr}\left\{\left[\bz ( xx^*)^{\frac{1-\alpha}{2\alpha } } \otimes\sigma_b^{\frac{1-\alpha}{2\alpha}}\jz\bz x\otimes i\jz \bz\n(\gamma_{a'a})\jz\bz x^*\otimes i\jz \bz(xx^*)^{\frac{1-\alpha}{2\alpha}}\otimes\sigma_b^{\frac{1-\alpha}{2\alpha}}\jz\right]^{\alpha}\right\}.\end{aligned}\ ] ] let for some unitary ; then , and .thus ^{\alpha}\right\}\\ & { \mbox { } \mbox { } } = \frac{1}{\alpha-1}\log\operatorname{tr}\left\{\left [ \bz |x|^{\frac{1}{\alpha}}\otimes\sigma_b^{\frac{1-\alpha}{2\alpha}}\jz\bz \n(\gamma_{a'a})\jz \bz |x|^{\frac{1}{\alpha}}\otimes\sigma_b^{\frac{1-\alpha}{2\alpha}}\jz\right]^{\alpha}\right\}\\ & { \mbox { } \mbox { } } = \frac{1}{\alpha-1}\log\operatorname{tr}\left\{\left [ \bz |x|^{\frac{1}{\alpha}}\otimes i_b\jz(\theta\circ\n)(\gamma_{a'a } ) \bz |x|^{\frac{1}{\alpha}}\otimes i_b\jz\right]^{\alpha}\right\}\\ & { \mbox { } \mbox { } } = \frac{\alpha}{\alpha-1}\log{\left\| \bz y^{\frac{1}{2\alpha}}\otimes i_b\jz(\theta\circ\n)(\gamma_{a'a } ) \bz y^{\frac{1}{2\alpha}}\otimes i_b\jz\right\|}_{\alpha } , \label{cb2}\end{aligned}\ ] ] where we used the notation . hence , optimizing over all bipartite pure states is equivalent to optimizing over all such that , i.e. , all states on .the latter yields where the first supremum in is taken over all unitaries on , and the second equality in is due to .this finishes the proof .it is easy to see that for any fixed , and are monotone decreasing on , and for these latter relations see , e.g. and .since for any fixed , and are continuous , we get that note that for any , , and hence for the rest we assume that all these quantities are finite , since otherwise is trivial .let denote the set of pure states on , where is a copy of ; then is a compact set , and is lower semicontinuous on by , while for a fixed , is monotone increasing .using now lemma [ lemma : minimax ] , we get [ mi limit]we only prove the assertion for , as the proof for goes exactly the same way .first , we have next , note that by , is lower semicontinuous in on the compact set , and it is monotone increasing in .hence , by lemma [ lemma : minimax ] , [ channel info limit]we only prove the assertion for , as the proof for goes exactly the same way .first , next , let be a fixed copy of .note that is the infimum of continuous functions , and hence it is upper semi - continuous on the compact set of pure states on . on the other hand , it is monotone in by , and hence we can use lemma [ lemma : minimax ] and to obtain * proof of lemma [ lemma : geom int ] . *let be a copy of .by lemma [ lemma : channel div repr ] , we have let . according to ( * ? ? ?* lemma 3 ) , the rnyi divergence in can be written as where ^{1/2}\bz \rho_{\hat a}^{\frac{1}{\alpha}}\otimes \sigma_b^{\frac{1-\alpha}{\alpha}}\jz[\gamma^{\n}]^{1/2}\jz^{\alpha}\right\},\ ] ] and for , and for . 
for , is operator concave on , and is monotone increasing and concave on positive semidefinite operators .thus is convex in .note that is affine , and applying theorem 1.1 in , with , ^{1/2}$ ] , to the quantity ( 1.3 ) in , we get that is concave in .similarly , for , is operator convex on , and is monotone increasing and convex on positive semidefinite operators .thus is convex in , and again by theorem 1.1 in , it is concave in .hence , we can use the kneser - fan minimax theorem to obtain for every .the case follows by where the second and the last identities are due to lemma [ mi limits ] .charles h. bennett , igor devetak , aram w. harrow , peter w. shor , and andreas winter . quantum reverse shannon theorem and resource tradeoffs for simulating quantum channels ., 60(5):29262959 , may 2014 .arxiv:0912.5537 .charles h. bennett , peter w. shor , john a. smolin , and ashish v. thapliyal .entanglement - assisted classical capacity of noisy quantum channels ., 83(15):30813084 , october 1999 .arxiv : quant - ph/9904023 .charles h. bennett , peter w. shor , john a. smolin , and ashish v. thapliyal .entanglement - assisted capacity of a quantum channel and the reverse shannon theorem ., 48:26372655 , october 2002 .arxiv : quant - ph/0106052 .igor devetak , christopher king , marius junge , and mary beth ruskai .multiplicativity of completely bounded -norms implies a new additivity result ., 266(1):3763 , august 2006 .arxiv : quant - ph/0506196 .martin mller - lennert , frdric dupuis , oleg szehr , serge fehr , and marco tomamichel . on quantum rnyientropies : a new generalization and some properties ., 54(12):122203 , december 2013 .arxiv:1306.3142 .si - hui tan , baris i. erkmen , vittorio giovannetti , saikat guha , seth lloyd , lorenzo maccone , stefano pirandola , and jeffrey h. shapiro .quantum illumination with gaussian states ., 101(25):253601 , december 2008 . arxiv:0810.0534 .mark m. wilde , andreas winter , and dong yang .strong converse for the classical capacity of entanglement - breaking and hadamard channels via a sandwiched rnyi relative entropy ., 331(2):593622 , october 2014 .
|
this paper studies the difficulty of discriminating between an arbitrary quantum channel and a replacer channel that discards its input and replaces it with a fixed state . the results obtained here generalize those known in the theory of quantum hypothesis testing for binary state discrimination . we show that , in this particular setting , the most general adaptive discrimination strategies provide no asymptotic advantage over non - adaptive tensor - power strategies . this conclusion follows by proving a quantum stein s lemma for this channel discrimination setting , showing that a constant bound on the type i error leads to the type ii error decreasing to zero exponentially quickly at a rate determined by the maximum relative entropy registered between the channels . the strong converse part of the lemma states that any attempt to make the type ii error decay to zero at a rate faster than the channel relative entropy implies that the type i error necessarily converges to one . we then refine this latter result by identifying the optimal strong converse exponent for this task . as a consequence of these results , we can establish a strong converse theorem for the quantum - feedback - assisted capacity of a channel , sharpening a result due to bowen . furthermore , our channel discrimination result demonstrates the asymptotic optimality of a non - adaptive tensor - power strategy in the setting of quantum illumination , as was used in prior work on the topic . the sandwiched rnyi relative entropy is a key tool in our analysis . finally , by combining our results with recent results of hayashi and tomamichel , we find a novel operational interpretation of the mutual information of a quantum channel as the optimal type ii error exponent when discriminating between a large number of independent instances of and an arbitrary `` worst - case '' replacer channel chosen from the set of all replacer channels .
|
let be a finite and totally ordered set of vertices .an increasing tree on is a tree rooted at the smallest element of such that the sequence of vertices along the branch from the root to any vertex increases .a _ random recursive tree _ ( rrt for short ) of size is a tree picked uniformly at random amongst all increasing trees on .henceforth we write for such a rrt .note that the root vertex of is given by .we consider bernoulli bond percolation on with parameter .this means we first pick and then remove each edge with probability , independently of the other edges .we obtain a partition of vertices into clusters , i.e. connected components , and we are concerned with the asymptotic sizes of these clusters .let us call percolation on a rrt in the regime * _ weakly supercritical _, if , * _ supercritical _ , if for some fixed , * _ strongly supercritical _ , if .the terminology is explained by our results : we will see that the root cluster has size , while the next largest clusters have a size of order .we encode the sizes of all percolation clusters by a tree structure , which we call the _ tree of cluster sizes_. a percolation cluster of is called a cluster of generation , if it is disconnected from the root cluster by exactly deleted edges . in the tree of cluster sizes , vertices of level are labeled by the sizes of the clusters of generation .consequently , the root vertex represents the size of the root cluster of .then , if a vertex represents the size of a cluster of generation , its children are given by the sizes of those clusters of generation which are separated from by one deleted edge ( see figure ) .we normalize cluster sizes of generation by a factor . after a local re - ranking of vertices , we show that the tree of cluster sizes converges in distribution to the genealogical tree of a continuous - state branching process ( csbp ) in discrete time , with reproduction measure on and started from a single particle of size ( theorem [ t1 ] ) .moreover , we obtain precise limits for the largest non - root clusters ( corollary [ c1 ] ) .asymptotic cluster sizes have been studied for numerous other random graph models . first and foremost , these include the erds - rnyi graph model ( see alon and spencer ( * ? ? ? * chapter 11 ) for an overview with further references ) . concerning trees , uniform cayley trees of size have been studied in the works of pitman , and pavlov in the regime , fixed , where the number of giant components is unbounded . for general large trees , bertoin gives in a criterion for the root cluster of a bernoulli bond percolation to be the ( unique ) giant cluster . unlike random cayley trees of size , whose heights are typically of order , random recursive trees have heights of logarithmic order ( see e.g. the book of drmota ) .bertoin proved in that in the supercritical regime when , the size of the root cluster of a rrt on vertices , normalized by a factor , converges to in probability , while the sizes of the next largest clusters , normalized by a factor , converge to the atoms of some poisson random measure .this result was extended by bertoin and bravo to large scale - free random trees , which grow according to a preferential attachment algorithm and form another family of trees with logarithmic height .here we follow the route of and analyze a system of branching processes with rare neutral mutations .
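before entering the details , the following small simulation sketch ( an editorial illustration , not code from the paper ) makes the above objects concrete : it grows a random recursive tree by uniform attachment , realizes bernoulli bond percolation with parameter p_n = 1 - t/log n by deleting each new edge independently , and reads off the root cluster and the largest non - root clusters ; the deleted edges play the role of the rare neutral mutations discussed below .

```python
import math
import random
from collections import Counter

def percolated_rrt_clusters(n, t, seed=0):
    rng = random.Random(seed)
    p = 1.0 - t / math.log(n)          # supercritical percolation parameter
    cluster_of = [0]                   # vertex 1 founds the root cluster (label 0)
    next_label = 1
    for v in range(1, n):
        parent = rng.randrange(v)      # uniform attachment: recursive construction of the tree
        if rng.random() < p:
            cluster_of.append(cluster_of[parent])   # edge kept: join the parent's cluster
        else:
            cluster_of.append(next_label)           # edge deleted: a new cluster (a "mutant") is born
            next_label += 1
    sizes = Counter(cluster_of)
    root_size = sizes.pop(0)
    return root_size, sorted(sizes.values(), reverse=True)

n, t = 200_000, 1.0
root, rest = percolated_rrt_clusters(n, t)
print("root cluster size / n:", root / n, "  (for comparison, exp(-t) =", round(math.exp(-t), 4), ")")
print("largest non-root clusters, rescaled by log(n)/n:",
      [round(s * math.log(n) / n, 3) for s in rest[:5]])
```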
in this way we gain control over the sizes of the root cluster and of the largest clusters of the first generation , for all regimes .an iteration of the arguments then allows us to prove convergence of higher generation cluster sizes .the methods of are based on a coupling of iksanov and mhle between the process of isolating the root in a rrt and a certain random walk .they seem less suitable for the weakly supercritical regime , where one has to look beyond the passage time up to which the coupling is valid .this was already mentioned in the introduction of , where also the question is raised how the sizes of the largest clusters behave when . in the second part of this paper , we however readopt the methods of .we consider a destruction process on , where edges are equipped with i.i.d .exponential clocks and deleted at the time given by the corresponding variable . starting with the full tree , each removal of an edge gives birth to a new tree component rooted at the outer endpoint of .the order in which the tree components are cut suggests an encoding of their sizes and birth times by a tree - indexed process , which we call the _ tree of components _ ( see figure for an example ) .keeping track of the birth times allows us to consider only those tree components which are born in the destruction process up to a certain finite time . interpreting the latter as a version of a bernoulli bond percolation on , tree components are naturally related to percolation clusters .this observation was made by bertoin in and then used to study cluster sizes in the supercritical regime .we further develop these ideas in the last section and make the link to our results on percolation from the first part of this paper .the starting point is a limit result for the tree of components ( theorem [ t2 ] ) , which we believe is of interest on its own .the destruction process can be viewed as an iterative application of the cutting down or isolation of the root process , which has been analyzed in detail for rrt s in meir and moon , panholzer , drmota _ et al . _ , iksanov and mhle , bertoin and others .the tree of components should be seen as a complement to the so - called _ cut - tree _ , which was studied for random recursive trees by bertoin in .we briefly recall its definition at the very end .the rest of this paper is organized as follows .the goal of section [ streeofclustersizes ] is to prove our main result theorem [ t1 ] on the sizes of percolation clusters .we first introduce the tree of cluster sizes and state the theorem .then we establish the connection to yule processes and obtain the asymptotic sizes of the root and first generation clusters .we then turn to higher generation clusters in the tree and finish the proof of theorem [ t1 ] .section [ streeofcomponents ] is devoted to the analysis of the destruction process of a rrt .at first we define the tree of components and formulate our main result theorem [ t2 ] for this tree .the splitting property of random recursive trees transfers into a branching property for the tree of components , which we illustrate together with the coupling of iksanov and mhle in the next part . 
then we prove theorem [ t2 ] .in the last part , we sketch how our analysis of the destruction process leads to information on percolation clusters in the supercritical regime .we compare our results there with theorem [ t1 ] and finish this paper by pointing at the connection to the cut - tree .we finally mention that except for the very last part , section [ streeofcomponents ] on the destruction process can be read independently of section [ streeofclustersizes ] .we use a tree structure to store the percolation clusters or , more precisely , their sizes . in this direction , recall that the universal tree is given by with the convention and . in particular , an element is a finite sequence of strictly positive integers , and we refer to its length as the `` generation '' or level of .the child of is given by , .the empty sequence is the root of the tree and has length .if no confusion occurs , we simply write instead of .now consider bernoulli bond percolation on a rrt with parameter .this induces a family of percolation clusters , and we say that a cluster is of generation , if it is disconnected from the root by erased edges .this means that exactly edges have been removed by the percolation from the path in the original tree connecting to the root of the cluster . in this terminology ,the only cluster of the zeroth generation is the root cluster .we define recursively a process indexed by the universal tree , which we call the _ tree of cluster sizes_. first , is the size of the root cluster of .next , we let denote the decreasingly ranked sequence of the sizes of the clusters of generation , where , and in the case of ties , clusters of the same size are ordered uniformly at random .we continue the definition iteratively as follows .assume that for some with , has already been defined to be the size of some cluster of generation .we then specify the children of . among all clusters of generation , we consider those which are disconnected by exactly one erased edge from .similar to above , we rank these clusters in the decreasing order of their sizes and let be the size of the largest .an example is given in figure .the definition is completed by putting for all which have not been specified in the above way . in particular , for all with , and if for some , then all elements of the subtree of rooted at are set to zero .* figure 1 * + _ right : the corresponding percolation clusters , whose sizes are encoded by . _ our limit object is given by the genealogical tree of a continuous - state branching process in discrete time with reproduction measure , started from a single particle .the distribution of is characterized by induction on the generations as follows ( cf .* definition 1 ) ) . 1. almost surely ; 2 . for every conditionally on , the sequences for the vertices at generation are independent , and each sequence is distributed as the family of the atoms of a poisson random measure on with intensity , where the atoms are ranked in the decreasing order of their sizes .we turn now to the statement of theorem [ t1 ] . recall that we consider the regimes with as . in the strongly supercritical regimewhen , the root cluster has size , and if stays bounded , the next largest clusters will be of constant size only . 
in order to exclude this case , and since we would like to consider also higher generation clusters , we shall implicitly assume that for every .if this last condition fails , then our convergence results do still hold restricted to generations .[ t1 ] as , in the sense of finite - dimensional distributions , while the theorem specifies the size of the root cluster as tends to infinity , it does not immediately answer the question how the sizes of the next largest clusters behave .we will however see that for fixed , the largest non - root clusters are with high probability to be found amongst the largest clusters of the first generation , provided and are taken sufficiently large . [ c1 ] as , we have in probability . moreover , for each fixed , where in accordance with our definition of , are the atoms of a poisson random measure on with intensity . herewe will develop the methods that enable us to prove finite - dimensional convergence of the tree of cluster sizes for generations .we conclude this part with the proof of corollary [ c1 ] . in the next section ,we lift the convergence to higher levels in the tree and thereby finish the proof theorem [ t1 ] .the following recursive construction of a rrt forms the basis of our approach .we consider a standard yule process , i.e a continuous - time pure birth process started from , with unit birth rate per unit population size .then , if the ancestor is labeled by and the next individuals are labeled in the increasing order of their birth times , the genealogical tree of the yule process stopped at the instant is a version of . with this construction , percolation on a rrt with parameter can be interpreted in terms of neutral mutations which are superposed to the genealogical tree . in the description that followswe are guided by and .except for the ancestor , we let each individual of the yule process be a clone of its parent with probability and a mutant with probability . beinga mutant means that the individual receives a new genetic type which was not present before .the reproduction law is neutral in the sense that it is not affected by the mutations .we record the genealogy of types by the universal tree in the following way .every vertex stands for a new genetic type .the empty set represents the type of the ancestor , and for every and , the child of , i.e. , stands for the genetic type which appeared at the instant when the mutant was born in the subpopulation of type .starting from and for , we write for the size of the subpopulation of type at time , when neutral mutations occur at rate per unit population size .clearly , the sum over all subpopulations evolves as a standard yule process , and we will henceforth work with defined in this way .moreover , interpreting the genealogical tree of as a rrt as above , the sizes of the clusters of generation are given by the variables with .note however that in the tree of cluster sizes , the children of each element are decreasingly ordered according to their sizes , while in the population model , the sequence for is ordered according to the birth times of the mutants stemming from type , i.e. 
type was born before type for .let us denote the birth time of the subpopulation of type by clearly , for , each variable is almost surely finite .moreover , each process for is distributed as a continuous - time pure birth process with birth rate per unit population size , started from a single particle .once an individual of a new genetic type appears , the population of that type evolves independently , which shows that the processes for are independent .the sequence of subpopulations bearing a single mutation is moreover independent from the sequence of its birth times : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ the processes for are i.i.d . and independent of the sequence of birth times . __ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ for a formal proof of this statement , see ( * ? ? ?* lemma 1 ) .our first aim is to obtain a joint limit law for and , , when and . in this direction, we recall that is non - negative square - integrable martingale with terminal value given by a standard exponential variable .the next lemma , which is similar to lemma 2 in , shows that the speed of convergence is exponential .[ l2bebr ] for every , one has similarly , for fixed , , , is a martingale .its terminal value is another standard exponential random variable , but for tending to , converges to .more specifically , ( * ? ? ?* lemma 3 ) reads in our case as follows .[ l3bebr ] in particular , converges to in as .in order to prove a joint limit law for the processes , , we need information on their birth times when .this is achieved by the next lemma , which corresponds to ( * ? ? ?* lemma 4 ) .[ l4bebr ] as , in the sense of finite - dimensional distributions , where , and are i.i.d .standard exponential variables . as for lemmas [ l2bebr ] and [ l3bebr ], one can follow the proof of to get this last result for our model . as a consequence, we see that for all i.e. the set of values is stochastically bounded as . for the rest of this section, we let depend on such that as , but write mostly instead of .we first compute the asymptotic size of the ancestral subpopulation , or , to put it differently , the asymptotic size of the root cluster of a bernoulli bond percolation on with parameter .[ c2bebr ] since a.s ., we have a.s . , implying on the other hand , we know from lemmas [ l2bebr ] and [ l3bebr ] that this proves the statement . from the proof of the foregoing lemma, we deduce that _ a fortiori _ , as .we recall that for theorem [ t1 ] we additionally require for all .we will now implicitly assume this at least for , so that in particular , we now consider two independent sequences and of i.i.d .standard exponential random variables .we shall assume that they are both defined on the same probability space .as before , let .theorem of tailored to our needs yields finite - dimensional convergence of the , . [ t2bebr ] as , in the sense of finite - dimensional distributions , for and , put let be a family of random times with in probability . 
from lemma [ l3bebr ] ( with and in place of and ) we infer that there is the convergence in the sense of finite - dimensional distributions concerning the birth times , we have by lemma [ l4bebr ] the finite - dimensional convergence by the remark below the definition of , the sequence is independent from , so that we have in fact joint weak convergence towards .we finally let . then in probability for by and . by the mapping theorem, the product of the above left hand sides converges to , as claimed .in order to obtain convergence of the tree of cluster sizes for the first generation , we have to rank the sequence in the decreasing order of their elements .note that finite - dimensional convergence for the reordered sequence can not directly be deduced from proposition [ t2bebr ] .we first have to show that for fixed , the largest subpopulations of generation at time are with high probability to be found amongst the oldest when and . in view of the last proposition and, we have to ensure that at time , we see only with small probability a subpopulation of size of order which bears a single mutation and was born at a time much later than . for later use , namely for the proof of corollary [ c1 ] , it will be helpful to consider also subpopulations with more than one mutation . for that purpose ,let us list the full system of subpopulations in the order of their birth times .we obtain a sequence which represents the same process as , such that , , and if less than mutants were born up to time .moreover , is a subsequence of which corresponds to the subpopulations with a single mutation .we denote by the number of subpopulations born up to time , discounting the ancestral population of type .our next statement resembles ( * ? ? ?* lemma 7 ). however , in our setting we have to be more careful with the estimates .[ l7bebr ] let .then denote by the natural filtration generated by the system of processes .the counting process is -adapted , and its jump times are -stopping times . by the strong markov property , we see that each of the processes for is a yule process started from a single particle of size , with birth rate per unit population size . moreover , is independent of .we let and , where . the number of processes born after time , which have at time a size greater than is given by for fixed , is geometrically distributed with parameter , see e.g. yule .we obtain , using the bound in the second inequality , \dt n^{(p)}(t)\right).\end{aligned}\ ] ] the dynamics of the family and the strong markov property entail that grows at rate , where is a standard yule process .in particular , is a martingale . since , we get with the substitution in the second line \dt t\\ & = \frac{1-p}{p}\int_{\e^{pr_n}}^{\e^{ps_n}}x^{(1-p)/p}\exp\left[-x(\ve/2)(1-p)n^p\exp(-ps_n)\right]\dt x.\end{aligned}\ ] ] we perform an integration by parts and substitute the values of and .this gives \\ & + \;\ ; 2\frac{\e^{ps}(1-p)}{\ve p^2}\int_{\e^{pr_n}}^{\e^{ps_n}}x^{(1 - 2p)/p}\exp\left[-x(\ve/2)(1-p)\exp(-ps)\right]\dt x.\end{aligned}\ ] ] for large , we have and on the domain of integration . 
for such +(16/\ve^2)\e^{2ps}\exp\left[-(\ve/2 ) \e^{p(r - s)}\right].\ ] ] in particular , for fixed , ,\ ] ] and the right side converges to zero when and , are fixed .we have shown that since by , the lemma is proved .we are now in position to prove theorem [ t1 ] restricted to generations and .we recall that with and as .[ pgen01conv ] as , in probability , and for every fixed , the convergence of the root cluster was already shown in lemma [ c2bebr ] .indeed , it follows from our construction that is distributed as the size of the root cluster of a bernoulli bond percolation on with parameter .next , we deduce from proposition [ t2bebr ] and together with lemma [ l7bebr ] that if we write for the decreasing rearrangement of a sequence of positive real numbers with either pairwise distinct elements or finitely many non - zero terms , we have in the sense of finite - dimensional distributions as tends to infinity . since only remains to identify the limit on the right hand side of .recalling that is independent of , can be viewed as the sequence of atoms of a poisson point process on with intensity , and the claim follows .we now turn to the proof of corollary [ c1 ] stated in section [ smainresults ] . in view of what we have already proved , it will be sufficient to check that for each fixed , the subpopulations which were born up to time carry all a single mutation with high probability when similarly to the definition of , we let denote the number of subpopulations with a single mutation at time .the following statement is similar to lemma in .[ l6bebr ] let denote the number of subpopulations born up to time , which bear more than one single mutation .then for each , let .since is a martingale , similarly , we obtain using , and , a small computation shows for . from , lemmas [ l7bebr ] , [ l6bebr ] and proposition [ t2bebr ], we see that the sizes of the largest non - ancestral subpopulations at time are taken by the subpopulations with a single mutation only .recalling the connection between subpopulations at time and percolation clusters , the proof of the corollary is then a consequence of proposition [ pgen01conv ] .the recursive structure of allows us to transfer the arguments of the foregoing section to higher generation clusters .we however need some preparation .let be a rrt on , as usual . hereit will be convenient to label the edges of by their outer endpoints , i.e. the edge joining vertex to vertex , where , is labeled .we then say that is the edge of .we incorporate bernoulli bond percolation on , but instead of deleting edges , we simply mark them with probability each , independently of each other .after such a marking of edges , we call a subtree of _ intact _, if it contains only unmarked edges and is maximal in the sense that no further edges without marks can be attached to it .in other words , the intact subtrees of are precisely the percolation clusters of .we again view as the genealogical tree of a standard yule process stopped at the instant when the individual is born .henceforth we will identify vertices with individuals , i.e. 
we will make no difference between the vertex labeled and the individual of the population system .the marked edges indicate a birth event of a mutant .this means that if the edge is a marked edge , then the individual is a mutant , and the vertices of the intact subtree rooted at correspond to the individuals bearing the same genetic type as the individual .moreover , the genetic type of the individual can be derived from the subtree of spanned by the vertices and from the marks on its edges .our description shows that we may generate the subpopulation sizes , , by first picking a rrt , then marking each edge with probability , independently of each other , and then defining to be the size of the intact subtree of rooted at the mutant of type .let us write for the full genealogical ( sub)tree which stems from the mutant of type .this means that is the maximal subtree of rooted at the mutant of type , including all marked and unmarked edges above its root .clearly , might contain several intact subtrees of , and we agree that is given by itself .moreover , we let if there is no mutant of type . for example , the non - empty vertex sets of the genealogical subtrees of the recursive tree on the left side of figure ( dashed lines represent the marked edges ) are given by , , , , .let us introduce the following terminology .for an arbitrary subset of size , we call the bijective map from to , which preserves the order , the _ canonical relabeling _ of vertices . clearly , the canonical relabeling transforms a recursive tree on into a recursive tree on .we next observe that conditionally on its size and upon the canonical relabeling of its vertices , is itself distributed as a rrt on . indeed , as we pointed out above , in order to decide whether a given vertex of is the root of the subtree encoded by , we have to look only at the subtree ( with its marks ) spanned by the vertices . in particular , the structure of the subtree stemming from is irrelevant .if we condition on and perform the canonical relabeling of vertices , the recursive construction then implies that each increasing arrangement of the vertices is equally likely , that is to say is a random recursive tree .moreover , if do not lie on the same infinite branch of emerging from the root , then and are conditionally on their sizes independent rrt s , since their vertex sets are disjoint .we remark that these properties of random recursive trees are closely related to the so - called _ splitting property _ , which plays a major role in our analysis of the destruction process of a rrt in section [ streeofcomponents ] .we henceforth call a subtree with a subtree of generation . our final main step for proving theorem [ t1 ] is a convergence result for the `` tree of subtree sizes '' . for that purpose , we decreasingly order the children of each element , but keep the parent - child relation .more precisely , each element has finitely many non - zero children , say , and we let be the random bijection which sorts this sequence in the decreasing order , i.e. with for . out of these mapswe define the global random bijection recursively by setting , , and then , given , , , .note that indeed preserves the parent - child relation , i.e. 
children of are mapped into children of .we recall that , and for each .[ ptreeofsubtreesizes ] as , in the sense of finite - dimensional distributions , the convergence of is trivial .let us first show that in the sense of finite - dimensional distributions , as , fix and denote by the size of the root percolation cluster inside , i.e. the size of the intact subtree with the same root node as . since conditionally on its size ( and upon the canonical relabeling ) , is a random recursive tree , lemma [ c2bebr ] shows that for each , furthermore , in the notation from section [ syule ] we have the equality in distribution proposition [ t2bebr ] ( with , defined there ) , the last two displays and the fact that imply the convergence in distribution from lemma [ c2bebr ] and together with lemma [ l7bebr ] , we know that the largest subtrees amongst , , are with high probability to be found under the first , provided and are large . with our identification of the ranked sequence from the proof of proposition [ t2bebr ] , the last display therefore implies . we next show that in the sense of finite - dimensional distributions , as we already remarked , for disjoint integers , the rrt s and are conditionally on their sizes _ independent _ rrt s .since we have just proved finite - dimensional convergence of , it will therefore be enough to show that for ] be a continuous function . recall that is a family of i.i.d .standard exponentials , which is independent of the family .choosing so large such that , we obtain by the remark above -\e\left[f((\pz_{1,t},z_{1,t}),\dots,(\pz_{\ell , t},z_{\ell ,t}))\right]\right| \leq \ve.\ ] ] we will now prove that if is large , then for all large enough also \right.}\nonumber\\ & -\left.\e\left[f\left(\left(\frac{\ln n}{n}\pb_{\mu(1),t}^{(n)},b_{\mu(1),t}^{(n)}\right),\dots,\left(\frac{\ln n}{n}\pb_{\mu(\ell),t}^{(n)},b_{\mu(\ell),t}^{(n)}\right)\right)\right]\right| \leq \ve.\end{aligned}\ ] ] here , since is fixed , we write instead of and similarly for the birth times .first , it follows from lemma [ l5 ] that for , there exists such that for each and for all sufficiently large , next , if is the size of the largest tree component amongst those which were cut off from the root component in the destruction process on at a time , then on the event , there is the equality of random vectors therefore , follows if we show that for large and all large , write for the number of edges of the root component in the destruction process at time . by the splitting property ,conditionally on , the variable is distributed as the size of the largest tree component which was produced by the algorithm for isolating the root of a rrt of size .we now claim that converges in distribution as to the largest atom of a poisson random measure on with intensity . indeed , if is a sequence of of i.i.d .copies of , see , then for , the number of indices such that is binomially distributed with parameters and . combining the first part of with theorem 16.16 of kallenberg , we deduce that the largest variable among , normalized by a factor , converges in distribution to largest atom of a poisson random measure on with intensity .clearly , under the coupling of iksanov and mhle , is the size of the remaining root component after edge removals in the algorithm for isolating the root . 
since in probability by ,an appeal to the coupling proves our claim about .we finally notice that by the second part of lemma [ l5 ] , in probability , that is converges in distribution to the largest atom of a poisson random measure on with intensity .choosing large enough , follows .since can be chosen arbitrarily small , an application of the triangle inequality together with lemma [ l5 ] finishes the proof of proposition [ p1 ] .the convergence of is trivial , and proposition [ p1 ] shows the convergence of generation .let us now show that also as in the sense of finite - dimensional laws .let .employing lemma [ l4 ] and proposition [ p1 ] , it suffices to show that for \rightarrow [ 0,1]$ ] bounded and uniformly continuous , and , }\\ & \rightarrow \e\left[g(\pz_j , z_j)f^{(1)}(\pz_ja_{1},z_j + b_{1})\dots f^{(\ell)}(\pz_ja_{\ell},z_j+b_{\ell})\right],\hspace{3cm}\end{aligned}\ ] ] where for , is the atom with the largest first coordinate of a poisson random measure on with intensity .we consider only the case . by lemma [ l5 ] , we have for each integer with and almost all the equality of the conditional densities = \e_m\left[f^{(1)}\left(\frac{\ln^2 n}{n}\pb_{\ast}^{(m)},s+\frac{\ln n}{\ln m}b_{\ast}^{(m)}\right)\right],\ ] ] where is the mathematical expectation starting from a random recursive tree with vertices , and under , is in the first coordinate the size and in the second the birth time of the largest tree component of the first generation produced by a destruction process on with parameter . now if for some fixed , integer - valued , we obtain from proposition [ p1 ] that \sim \e\left[f^{(1)}(aa_1,s+b_1)\right],\ ] ] where is the atom with the largest first coordinate of a poisson random measure on with intensity . on the other hand, we already know that the pair converges in distribution as towards .since the map is uniformly continuous on bounded sets , this establishes .the arguments can now easily be extended to the subsequent generations , and the theorem is proved . in , bertoin uses the coupling of iksanov and mhle to study the asymptotic sizes of the largest and next largest percolation clusters of a supercritical bernoulli bond percolation on with parameter let us recall his strategy . if the destruction process ( with parameter ) is stopped at time , then one observes a bernoulli bond percolation on with parameter . under this coupling ,the tree components born in the destruction process up to time contain the non - root percolation clusters of .in fact , each such percolation cluster of can be identified with a subtree of a tree component rooted at the same vertex , meaning that within its surrounding component , the percolation cluster forms the root cluster .the usefulness of this point of view comes from two facts .firstly , we know from the second part of lemma [ l5 ] that in the regime , the root cluster of a rrt has size as . secondly , the asymptotic sizes of the tree components can be specified ( see proposition [ p2 ] ) . in order to reveal the inner root percolation cluster inside a tree component ,the latter has to be `` unfrozen '' , i.e. some additional edges have to be erased .this approach was used by bertoin to study the sizes of the root percolation clusters inside the tree components of the first generation , and our aim is to outline how these ideas can be extended to all clusters .we first lift the convergence of lemma [ l5 ] to higher generations . 
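before lifting lemma [ l5 ] to higher generations , the coupling just described can be made concrete with a short simulation ( ours , and only a sketch ) : each edge of a random recursive tree carries an independent rate - one exponential clock , edges are destroyed when their clocks ring , and freezing the picture at time t keeps each edge independently with probability e^{-t } , i.e. one observes a bernoulli bond percolation with that parameter . the helper names and the choices of n and p below are arbitrary .

```python
import math
import random

def random_recursive_tree(n, rng):
    """Parent pointers of a random recursive tree on vertices 0..n-1:
    vertex i attaches to a uniformly chosen earlier vertex."""
    return [rng.randrange(i) for i in range(1, n)]     # parent[i-1] is the parent of vertex i

def destruction_snapshot(n, t, rng):
    """Give every edge an independent Exp(1) clock and delete the edges whose
    clock rings before time t; the surviving edges form a Bernoulli bond
    percolation with parameter exp(-t).  Returns the root cluster size and the
    sizes of the other clusters in decreasing order."""
    parent = random_recursive_tree(n, rng)
    alive = [rng.expovariate(1.0) >= t for _ in range(n - 1)]   # edge between i+1 and parent[i]
    comp = list(range(n))                                       # union-find over surviving edges
    def find(v):
        while comp[v] != v:
            comp[v] = comp[comp[v]]
            v = comp[v]
        return v
    for i, keep in enumerate(alive):
        if keep:
            comp[find(i + 1)] = find(parent[i])
    sizes = {}
    for v in range(n):
        r = find(v)
        sizes[r] = sizes.get(r, 0) + 1
    root = find(0)
    return sizes[root], sorted((s for r, s in sizes.items() if r != root), reverse=True)

n = 200_000
p = 1 - 1.0 / math.log(n)            # a percolation parameter approaching 1
t = -math.log(p)                     # the corresponding destruction time
root, others = destruction_snapshot(n, t, random.Random(1))
print(root / n, [s * math.log(n) / n for s in others[:5]])
```

the first number printed is the relative size of the root cluster , and the second list shows the five largest non - root cluster sizes multiplied by ln n / n , which makes them comparable with the rescaled quantities appearing in the displays above .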
towards this end , let then we can use lemma [ l5 ] instead of proposition [ p1 ] to obtain a limit result for the ranked version . here , by a small abuse of notation , is a random bijection that sorts the children of each element in the decreasing order of their first coordinate , keeping the parent - child relation . the limit process is obtained from by first removing those pairs with and then by a relabeling of the remaining elements .alternatively , in accordance with lemma [ l5 ] , the law of the limit can also be specified as follows . 1 . almost surely ; 2 . for every conditionally on , the sequences for the vertices at generation independent , and each sequence is distributed as the family of the atoms of a poisson random measure on with intensity , ranked in the decreasing order of the first coordinate .now the tree components have to be unfrozen to observe the percolation clusters inside .write for the tree component whose size and birth time is stored in ( with if there is no such component , and ) .say we want to determine the size of the root percolation cluster inside the tree component .this component was cut off from a bigger subtree at time . by the memoryless property of exponential variables ,we are therefore lead to perform a bernoulli bond percolation on with parameter , and adapting the arguments of , we deduce that the root cluster of has size more generally , denote by for the percolation cluster with the same root as ( under our coupling with the destruction process ) . in the percolation regime , we have using proposition [ p2 ] , the last two displays and similar arguments as in the proof of theorem [ t2 ] , we obtain the following limit result for the cluster sizes . extending the arguments of (* lemma 6 , 7 ) to higher levels in the tree , we moreover see that corollary [ c2 ] remains true if we apply our usual ranking operation to both sides .denote by the ranked version of , i.e. , where is a random bijection sorting the children of each element in the decreasing order , such that the parent - child relation is preserved .for the right hand side , let us write and for the ranked version of .then the convergence in corollary [ c2 ] transfers to the ranked versions , i.e. in the sense of finite - dimensional distributions in the regime .it is now instructive to compare this last convergence result with theorem [ t1 ] .we first remark that as for the tree of cluster sizes from section [ smainresults ] , the process stores the size of every percolation cluster of . both and the size of the cluster containing .but besides that , the two encodings are different . most importantly , if we look at some specific percolation cluster of and ask for the vertex to which the size of this cluster is attached in the process , we observe that its level does not merely depend on the total number of removed edges which separate the cluster from the vertex , but also on the order in which these edges were removed. to stress the difference in the encodings , call a percolation cluster encoded by some with a cluster of rank . 
in terms of our classification of clusters into generations from section [ smainresults ] , a cluster of generation with root node can be a cluster of rank ; the rank depends on the order in which the erased edges on the path from to were removed in the destruction process .conversely , a cluster of rank with root node can be a cluster of generation for , where dist denotes the graph distance on before the percolation was performed .figure illustrates the difference in the encoding by and , respectively .we tacitly assume that the tree of cluster sizes is defined in terms of the final state of a percolation on which is used to define .for example , the cluster is a cluster of rank , since the edge joining to its parent was the first edge from the path connecting to which was removed in the destruction process . on the other hand, is a cluster of generation , since it is disconnected from by two deleted edges in the final outcome of percolation .+ _ middle : the cluster encoding by .note that several orderings of edge removals give rise to the same tree ._ + _ right : the tree of cluster sizes defined in section [ smainresults ] ._ recall the description of from above .we now observe that conditionally on , the family is distributed as the sequence of the atoms of a poisson random measure on with intensity indeed , , and given , the image of the measure on by the map is on . since , we deduce from this characterization that the sequences and have the same distribution , which implies that the finite - dimensional limits of and agree ( under our normalizations ) .in fact , this already follows from our previous considerations : we have seen in the proof of corollary [ c1 ] that the largest non - root clusters are of generation , and every such cluster is necessarily a cluster of rank ( but not every cluster of rank is of generation , see cluster in figure ) . for higher levels in the trees and , the limits do however not agree .this comes from the fact that clusters of generation can represent clusters of a strictly lower rank . roughly speaking , if such a cluster has a size of order , it is visible in the limit under the encoding by , while it is not under the encoding by .the tree of components is related to the so - called _ cut - tree _ , which is defined in terms of a discrete - time destruction process , where edges are removed according to some order , for example a random uniform order .more specifically , the cut - tree is a rooted binary tree which encodes the destruction of a tree on a finite vertex set in the following way .the root vertex is given by the set .then , if the first edge is removed , splits into two subtrees with respective vertex sets and , and these vertex sets are attached as the two children to the root .the construction is then iterated in the natural way - if , for example , the next edge is removed from the subtree with vertex set , the latter splits into two vertex sets and , which are regarded as the two children of . in particular , the leaves of the cut - tree can be identified with the vertices of . unlike the tree of components , the cut - tree stores the vertex sets of the tree components and not merely their sizes .for example , in figure 4 the vertex sets of the tree components of the first generation , i.e. 
, , and ( in the order of their appearance ) , are represented by the vertices which are attached to the branch from the root to the leaf .the cut - tree has been analyzed for cayley trees and random recursive trees by bertoin in and , and then by bertoin and miermont and dieuleveut for galton - watson trees .their results can be used to obtain limit theorems for the number of steps to isolate a certain family of nodes , and in a similar direction , we believe that the tree of components can prove helpful , too . 99 wiley , third edition ( 2008 ) .( 2010 ) , 678 - 697 .( 2012 ) , 909 - 921 .( 2013 ) , 603 - 611 .( 2014 ) , 1098 - 2418 . to appear in _ ann .henri poincar probab .( 2013 ) , 1469 - 1493 . to appear in _ ann ._ preprint ( 2013 ) . arxiv:1312.5525 . ._ random structures algorithms _ * 34 - 3 * ( 2009 ) , 319 - 336 . .new york , vienna ( 2009 ) .( 2007 ) , 28 - 35 .second edition .probability and its applications ( new york ) .springer - verlag , new york ( 2002 ) . ._ mathematical biosciences _ * 21 * ( 1974 ) , 173 - 181 .( 1977 ) , 509 - 520 .pitman , j. coalescent random forests ._ j. combin .theory ser .* 85*(2 ) ( 1999 ) , 165 - 193 .pitman , j. _ combinatorial stochastic processes .cole dt de probabilits de st . flour ._ lecture notes in mathematics * 1875 * , springer ( 2006 ) .* 213 * ( 1924 ) , 21 - 87 .
|
we study bernoulli bond percolation on a random recursive tree of size with percolation parameter converging to as tends to infinity . the sizes of the percolation clusters are naturally stored in a tree . we prove convergence in distribution of this tree to the genealogical tree of a continuous - state branching process in discrete time . as a corollary we obtain the asymptotic sizes of the largest and next largest percolation clusters , extending thereby a recent work of bertoin which deals with cluster sizes in the supercritical regime . in a second part , we show that the same limit tree appears in the study of the tree components which emerge from a continuous - time destruction of a random recursive tree . we comment on the connection to our first result on bernoulli bond percolation . * key words : * random recursive tree , percolation , cluster sizes , destruction . 60k35 ; 05c05 .
|
simple stochastic games ( ssgs ) are played by two players called and in a sequence of steps .the players move a pebble along the edges of a directed graph whose vertices are partionned into three sets : , , and .when the pebble is on a vertex of or , the corresponding player chooses an outgoing edge and moves the pebble along it . when the pebble is on a vertex of ( a _ random _ vertex ) , the outgoing edgeis chosen randomly according to a fixed probability distribution .the players have opposite goals , as wants to reach a special sink vertex while wants to avoid it forever .an example of ssg is depicted in figure [ fig : examplessg ] , with vertices of represented as s , vertices of represented as s , and vertices of represented as s .ssgs are a natural model of reactive systems .consider , for example , a hardware component .it can be modelled as an ssg , whose vertices represent the global states of the component and the target is some error state to avoid .the nature of a given vertex depends on who can influence the immediate evolution of the system : it is a vertex if the software can choose between different options , a vertex if there is a ( non - deterministic ) input asked from the user , and a random vertex if the evolution depends on a stochastic environment .an optimal strategy for can then be used as the basis for the synthesis of a `` good '' driver , _ i.e._one which minimises the probability of entering the error state independently of the behaviour of the user .the main algorithmic problem about ssgs is the computation of values of the vertices and optimal strategies for the players .this problem was first adressed by condon , who showed that deciding whether the value of a vertex is greater than belongs to np and co - np .condon s algorithm guesses non - deterministically the values of vertices , which are rational numbers of linear size , and checks that they are solutions of some _ local optimality equations_. this algorithm is correct only for _ stopping _ games , where the pebble reaches either the target or a sink target with probability one , regardless of the players strategies .any ssg can be transformed in polynomial time into a stopping ssg with ( almost ) the same values , but it incurs a quadratic blow - up of the size of the game .three other algorithms for solving ssgs are presented in .the first one computes the values of the vertices using a quadratic program with linear constraints .the second one computes iteratively from below the values of the vertices , and the third is a strategy improvement algorithm _ la _ hoffman - karp .the two latter algorithms , as the ones recently proposed in , solve a series of linear programs which could be of exponential length . furthermore, solving a linear program requires high - precision arithmetic , even if it can be done in polynomial time .the best randomised algorithms achieve sub - exponential expected time . in this paperwe present two algorithms computing the values and optimal strategies in ssgs : the `` permutation - enumeration '' and the `` permutation - improvement '' algorithms .the common basis for both algorithms is that optimal strategies can be looked for in a subset of the positional strategies called _permutation strategies_. permutation strategies are derived from permutations over the random vertices . 
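for concreteness , such a game can be stored as a vertex partition together with the outgoing edges and , for the random vertices , their transition probabilities ; the following minimal python container is ours and is only meant to fix notation for the small computational sketches given later .

```python
from dataclasses import dataclass

@dataclass
class SSG:
    """Bare-bones container for a simple stochastic game: succ[v] lists the
    successors of vertex v, owner[v] is 'max', 'min' or 'rand', prob[v][w] is
    the transition probability out of a random vertex v, and target is the
    vertex that player Max tries to reach."""
    succ: dict
    owner: dict
    prob: dict
    target: str = "target"

    def check(self):
        for v, ws in self.succ.items():
            assert ws, f"{v} has no outgoing edge"
            if self.owner[v] == "rand":
                assert abs(sum(self.prob[v][w] for w in ws) - 1.0) < 1e-9

# a toy game: Max decides at m whether to gamble at the random vertex r or give up
game = SSG(
    succ={"m": ["r", "sink"], "r": ["target", "sink"],
          "target": ["target"], "sink": ["sink"]},
    owner={"m": "max", "r": "rand", "target": "rand", "sink": "rand"},
    prob={"r": {"target": 0.5, "sink": 0.5},
          "target": {"target": 1.0}, "sink": {"sink": 1.0}},
)
game.check()
```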
in order to find optimal strategies, the permutation - enumeration algorithm performs an exhaustive search among all permutation strategies , whereas the permutation - improvement algorithm performs successive improvements of permutation strategies , _ la _ hoffman - karp . the permutation - enumeration and the permutation - improvement algorithms share two advantages over existing algorithms .first , they perform much better on ssgs with few random vertices , as they run in polynomial time when the number of random vertices is logarithmic in the size of the game : it follows that the problem of solving ssgs is fixed - parameter tractable when the parameter is the number of random vertices .second , they do not rely on the transformation of the input ssg into a stopping ssg , which avoids the quadratic blow - up of the size of the game .moreover , the permutation - enumeration algorithm does not use linear or quadratic programming , ( it just computes the solutions to linear systems ) and its worst - case complexity is , where is the number of random vertices , is the number of edges and is the maximal bit - length of transition probabilities .the nominal complexity of the permutation - improvement algorithm is higher but we do not know any non - trivial lower bound for its complexity : the permutation - improvement algorithm may actually run in polynomial time .* outline . * in section [ sec : defs ] , we provide formal definitions for ssgs , values and optimal strategies .we describe then in section [ sec : playing ] the central notion of permutation strategies .section [ sec : optimality ] presents the permutation - enumeration algorithm , based on the _ self - consistency _ and _ liveness _ properties .section [ sec : heuristics ] introduces an improvement policy for permutations which leads to the permutation - improvement algorithm .a _ simple stochastic game _ is a tuple , where is a graph , is a partition of , and is a distinguished sink vertex in called the _target _ of the game .the transitions from the random vertices are equipped with probabilities described by the function ] .we will often use implicitly the following formulae which rule the probabilities and expectations once a finite prefix is fixed : ,\tau[h]}_{h_i}}(\gamma[h])\enspace ,\label{eq : pdecal}\\ { \mathbb{e}^{\sigma,\tau}_{v}\left[{\varphi}\mid v_0\dots v_i = h_0 \dots h_i\right ] } & = & { \mathbb{e}^{\sigma[h],\tau[h]}_{h_i}\left[{\varphi}[h]\right]}\enspace , \label{eq : edecal}\end{aligned}\ ] ] where (\rho_0\rho_1\ldots ) = \sigma(h_0\ldots h_{i-1 } \rho_0\rho_1\ldots) ] , ] are defined analogously . if we fix only s strategy and the initial vertex , the target vertex will be reached with probability at least : where is the event .starting from , player has strategies that guarantee a winning outcome with a probability greater than : minus for any .symmetrically , has strategies that guarantee a winning outcome with a probability less than : plus for any .it is clear that . in the case of ssgs ,stronger results are known : [ theo : pos ] let be a ssg . then , for any vertex , this common value is denoted by . 
furthermore , there are positional optimal strategies for both players , _ i.e._positional strategies and such that , for any strategies and : a ssg is _ normalised _ if the only vertex with value 1 is the target and there is only one ( sink ) vertex with value .our motivations for the introduction of this notion are twofold .first , several proofs are much simpler for normalised games .second , any ssg can be reduced to an equivalent normalised game in linear time and the resulting game is smaller than the original one .this reduction is presented on figure [ fig : normalisation ] : it simply consists in merging the region with value one into and the region with value zero into a new sink vertex .normalisation . ] in the remainder of this article , we assume that we are working on a normalised ssg , with random vertices .the existence of positional optimal strategies is a key property of ssgs and the cornerstone of many algorithms solving these games .the algorithms we propose rely on a refinement of this result : optimal strategies can be looked for among a subset of the positional strategies , the set of `` permutation strategies '' . as a matter of fact , theorem [ theo : pos ] is a corollary of results of the present paper .the proofs of our results often rely on the existence of values and optimal ( not only -optimal ) strategies in ssgs .this could be avoided the main point is to use instead of but we felt that it was not worth the extra complexity .the main intuition underlying permutation strategies is that the only meaningful events in a play are the visits to random vertices . between two visitsthe players only strive to impose which random vertex will be visited next , and the result of their interaction can be easily predicted .this is illustrated by figure [ fig : intuitions ] , which zooms on two details of figure [ fig : examplessg ] .coherence and contention . ] in the left part of figure [ fig : intuitions ] , must choose between the two random vertices and ( refusing to choose is not really an option ) .there is no reason to choose in one of the vertices , and in the other .we could consider only the strategies `` always go to '' and `` always go to '' .in the right part of figure [ fig : intuitions ] , we consider relationships between the two players strategies . from their respective vertices and , and can send the pebble to either or .we could restrict our attention to the cases where goes to one , and to the other .underlying these intuitions is the idea of a `` preference order '' over the random vertices . in the remainder of this article , we formalise it as a _ permutation _ : a one - to - one correspondance between and . for simplicity , we often write instead of and we consider the sink and target vertices as random vertices with the implicit assumption that they are respectively the lowest and greatest vertices : and . once a permutation has been fixed , the -strategies consist in trying to reach the highest ( with respect to ) possible random vertex , while tries to thwart her .notice that the situation is not exactly symmetric , since the burden of reaching a random vertex lies with : in case the pebble remains forever in controlled vertices then player wins .the formal definition of permutation strategies is based on the notion of _ deterministic attractor_. 
[ defi : detatt ] let be a set of vertices .the deterministic attractor of to , denoted by , is computed recursively : an attracting strategy to for is a positional strategy such that : symmetrically , a trapping strategy out of for is a positional strategy such that : the _ -regions _ associated with a permutation are defined as embedded deterministic attractors to the random vertices : & = & \{{\circledcirc}\ } \enspace , \\ \forall 1 \le i \le k , w_{{\mathbf{f}}}[i ] & = & { \operatorname{detatt}(\{{{\mathbf{f}}}_i,\ldots , { { \mathbf{f}}}_k,{\circledcirc}\ } ) } \setminus \bigcup_{j > i } w_{{\mathbf{f}}}[j]\enspace ,\\ w_{{\mathbf{f}}}[0 ] & = & \{{\otimes}\ } \enspace .\end{aligned}\ ] ] the _ -strategies _ and are strategies such that , on each ] .the following properties are easy to prove : if plays and plays from an initial vertex , the first random vertex reached by the pebble is the unique random vertex such that .figure [ fig : regions ] describes the -regions and -strategies of the game of figure [ fig : examplessg ] , for .-regions and -strategies in the game of figure [ fig : examplessg ] . ] when both players use their respective permutation strategies , the probability that a pebble starting in reaches is denoted by : [ prop : calcstrat ] let be a permutation .the -regions and the -strategies can be computed in time and the -values can be computed in time .the -regions and -strategies can be expressed in terms of _ deterministic _ games as they do not depend on what happens once a random vertex is reached .we can thus use the results of to compute them in time . in order to compute the -values ,we build a markov chain designed to mimic the behaviour of when the players use their -strategies .intuitively , we merge each region ] of are computed as follows .let be the set of vertices from which is reachable in .then , for each , , and is the unique solution of the following linear system : which can be solved in time .for each , .in this section we describe the permutation - enumeration algorithm which computes optimal strategies for both players .this algorithm relies on the following key property of permutation strategies .[ thm : vstrat1 ] in every ssg , there exists a permutation such that is optimal for and is optimal for .this theorem suggests a very simple enumerative algorithm computing values and optimal strategies : check for each permutation whether the -strategies are optimal .each test can be performed in polynomial time using linear programming .however , linear programming requires high - precision arithmetic and is expensive in practice .our permutation - enumeration algorithm uses a simpler criterion based on a refinement of theorem [ thm : vstrat1 ] : we look for permutations which are _ live _ and _ self - consistent_. 
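the computation behind proposition [ prop : calcstrat ] can be prototyped directly . the sketch below is our own reading of the elided formulas : the deterministic attractor is taken over the non - random vertices only , the -regions are peeled off from the top of the permutation , and the -values of the random vertices solve a small linear system . for simplicity it assumes that this system is non - singular , instead of restricting it , as the proposition does , to the vertices from which the target is reachable .

```python
import numpy as np

def det_attractor(owner, succ, targets):
    """One reading of the deterministic attractor: vertices from which Max can
    force a visit to `targets` without any help from the random vertices."""
    att = set(targets)
    changed = True
    while changed:
        changed = False
        for v, ws in succ.items():
            if v in att or owner[v] == "rand":
                continue
            ok = any(w in att for w in ws) if owner[v] == "max" else all(w in att for w in ws)
            if ok:
                att.add(v)
                changed = True
    return att

def phi_values(owner, succ, prob, perm, target="target", sink="sink"):
    """Regions and phi-values of a permutation perm = [f_1, ..., f_k] of the
    random vertices, f_k being the most attractive one for Max (our indexing)."""
    k = len(perm)
    region = {target: k + 1, sink: 0}
    taken = {target, sink}
    for i in range(k, 0, -1):                          # peel off W[k], ..., W[1]
        att = det_attractor(owner, succ, set(perm[i - 1:]) | {target})
        for v in att - taken:
            region[v] = i
        taken |= att
    for v in owner:                                    # remaining vertices: value-0 region
        region.setdefault(v, 0)
    # phi(f_i) = sum over successors w of prob(f_i, w) * phi(region of w),
    # with phi(target) = 1 and phi(value-0 region) = 0: a k x k linear system.
    A, b = np.eye(k), np.zeros(k)
    for i, f in enumerate(perm):
        for w, pr in prob[f].items():
            j = region[w]
            if j == k + 1:
                b[i] += pr                             # successor leads to the target region
            elif j >= 1:
                A[i][j - 1] -= pr                      # successor leads to the region of f_j
    phi_rand = np.linalg.solve(A, b)                   # assumes non-singularity (see lead-in)
    phi = {}
    for v, j in region.items():
        phi[v] = 1.0 if j == k + 1 else (0.0 if j == 0 else float(phi_rand[j - 1]))
    return region, phi

owner = {"m": "max", "r": "rand", "target": "rand", "sink": "rand"}
succ = {"m": ["r", "sink"], "r": ["target", "sink"], "target": ["target"], "sink": ["sink"]}
prob = {"r": {"target": 0.5, "sink": 0.5}, "target": {"target": 1.0}, "sink": {"sink": 1.0}}
print(phi_values(owner, succ, prob, perm=["r"])[1])    # the max vertex m gets value 0.5
```

on the toy game at the end of the listing , the unique random vertex and the max vertex both get value 1/2 , as expected . the permutation - enumeration algorithm of the next section would simply call such a routine for every permutation and test the two conditions defined there .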
the permutation - enumeration algorithm is based on two simple properties on permutations : self - consistency and liveness .self - consistency expresses the adequation between _ a priori _ preferences ( permutation ) and resulting values ( the -values ) .liveness stipulates that each random vertex has a positive probability to immediately lead to a better from s point of view region .[ defi : selfconsistency ] a permutation is self - consistent if : [ defi : liveness ] a permutation is live if : , { \delta({{\mathbf{f}}}_i)(v ) } > 0\enspace.\ ] ] as we show below , the -strategies associated with a live and self - consistent permutation are optimal and there is always such a permutation .the permutation - enumeration algorithm performs an exhaustive search for a live and self - consistent permutation .[ theo : algo2 ] the permutation - enumeration algorithm terminates and returns optimal strategies for and .its worst - case running time is .correctness and termination are proved in lemmas [ lem : correctness ] and [ lem : existence ] , respectively .the worst - case complexity follows from the fact that there are at most permutations and proposition [ prop : calcstrat ] . before we proceed with the proofs of the main lemmas ,let us make a case for liveness : figure [ fig : caseliveness ] shows that self - consistency is not enough to guarantee the optimality of the resulting strategies . in this excerpt from the game of figure [ fig : examplessg ] , s strategy in should be to send the pebble to , as could otherwise trap the play in .however , consider the permutation : sends the pebble from to to avoid ; sends the pebble from to to reach either or .we have thus . as a matter of fact , we have , so is self - consistent even though the -values are not the correct ones .liveness forbids this kind of gambits from .it replaces , in this aspect , the `` stopping '' hypothesis of condon . we first show that if a permutation is live and self - consistent , the -strategies are optimal ( lemma [ lem : correctness ] ) .we need two preliminary propositions .first , if is self - consistent and plays according to , the sequence is a submartingale and symmetrically if is self - consistent and plays according to the sequence is a supermartingale . [prop : martingale ] let be a self - consistent permutation .then , for any strategies and for and , vertex , and integer , } \geq { \varphi}_{{\mathbf{f}}}(v_i)\enspace,\\ \label{eq : supermartingale } & { \mathbb{e}^{\sigma,{\tau_{{\mathbf{f}}}}}_{v}\left[{\varphi}_{{\mathbf{f}}}(v_{i+1 } ) \mid v_0\dots v_i\right ] } \leq { \varphi}_{{\mathbf{f}}}(v_i)\enspace.\end{aligned}\ ] ] in order to prove ( ) , it is enough to show that for any finite play , } & \geq & { \varphi}_{{\mathbf{f}}}(h_i)\enspace .\label{eq : sub1 } \end{aligned}\ ] ] depending on the owner of , ( [ eq : sub1 ] ) follows from one of the following properties of : the equations ( [ eq : ttt2 ] ) and ( [ eq : ttt1 ] ) follows from the definition of , and ( [ eq : ttt5 ] ) follows from the self - consistency of : by definition of the -regions , if and then ( see ) , so .the proof of is similar and we do not detail it .second , we show a `` stopping property '' in the case where is live and plays .[ prop : live ] let be a live permutation . 
then , for any strategy for and initial vertex , by definition of liveness , } { \delta({{\mathbf{f}}}_i)(w)}\ ] ] is positive .let and then the definition of yields : or , since and are sinks : equation ( [ eq : pdecal ] ) yields : hence hence proposition [ prop : live ] .we can now prove the correctness of the permutation - enumeration algorithm : [ lem : correctness ] let be a live and self - consistent permutation .then is optimal for and is optimal for .we first prove that ensures that a pebble starting from has probability at least to reach : }\label{eq : co2}\\ & = & \lim_{i\in{\mathbb{n } } } { \mathbb{e}^{{\sigma_{{\mathbf{f}}}},\tau}_{v}\left[{\varphi}_{{\mathbf{f}}}(v_i)\right]}\label{eq : co3}\\ & \ge & { \mathbb{e}^{{\sigma_{{\mathbf{f}}}},\tau}_{v}\left[{\varphi}_{{\mathbf{f}}}(v_0)\right ] } = { \varphi}_{{\mathbf{f}}}(v)\label{eq : co4}\enspace,\end{aligned}\ ] ] where ( [ eq : co2 ] ) comes from proposition [ prop : live ] , ( [ eq : co3 ] ) is a property of expectations , and ( [ eq : co4 ] ) comes from proposition [ prop : martingale ] .then , we show that ensures that a pebble starting from has probability at most to reach : }\label{eq : co5}\\ & \le & \liminf_{i\in{\mathbb{n } } } { \mathbb{e}^{\sigma,{\tau_{{\mathbf{f}}}}}_{v}\left[{\varphi}_{{\mathbf{f}}}(v_i)\right]}\label{eq : co6}\\ & \le & { \mathbb{e}^{\sigma,{\tau_{{\mathbf{f}}}}}_{v}\left[{\varphi}_{{\mathbf{f}}}(v_0)\right ] } = { \varphi}_{{\mathbf{f}}}(v)\label{eq : co7}\enspace,\end{aligned}\ ] ] where ( [ eq : co5 ] ) holds because is a sink and , ( [ eq : co6 ] ) is a property of expectations , and ( [ eq : co7 ] ) comes from proposition [ prop : martingale ] . thus , for any strategies and for and , which completes the proof of lemma [ lem : correctness ] now we show the existence of a live and self - consistent permutation ( lemma [ lem : existence ] ) .our construction is based on proposition [ prop : rec ] and its correctness on proposition [ prop : valphi ] .[ prop : rec ] let be a set of vertices including the target vertex and be . then either or there is a random vertex in such that : be the set of vertices with maximal value in : and suppose that : let be a vertex in .as is normalised , we just need to show that , _ i.e._there is a strategy for such that for every strategy for , . by definition of , there is a positional strategy for such that , and it follows from the definition of that . as is also closed under random moves , a pebble starting in can only leave through a move of , which leads to as . we define now a non - positional strategy in which plays according to as long as the play remains in and switches definitively to an optimal strategy the first time the pebble moves out of .we can thus partition the plays starting in and consistent with and depending on if and where the play gets out of : is the set of plays remaining forever in , and for each in , is the set of plays where is the first visited vertex outside of .clearly and by definition of the strategy , , . hence since this holds for every , .by definition of this implies .[ prop : valphi ] let be a live permutation such that : then is self - consistent. note that under the same hypotheses , lemma [ lem : correctness ] imply that -strategies are optimal .we first show that : consider the strategy , which mimics until the first time the pebble reaches a random vertex and then switches definitively to an optimal strategy . 
by definition of ,the first random vertex belongs to , so ensures that a pebble starting in reaches with probability at least .a similar strategy for ensures that this probability is at most .so , and ( [ eq : ex1 ] ) follows .now we prove that and coincide . according to and by definition of permutation strategies , so , if and play according to their -strategies , the sequence is a martingale : } & = & \operatorname{val}(v_i)\enspace .\label{eq : ex2 } \end{aligned}\ ] ] consequenly , for every vertex , : }\label{eq : ex3}\\ & = & \lim_{i\in{\mathbb{n } } } { \mathbb{e}^{{\sigma_{{\mathbf{f}}}},{\tau_{{\mathbf{f}}}}}_{v}\left[\operatorname{val}(v_i)\right]}\label{eq : ex4}\\ & = & { \mathbb{e}^{{\sigma_{{\mathbf{f}}}},\tau}_{v}\left[\operatorname{val}(v_0)\right ] } = \operatorname{val}(v)\label{eq : ex5}\enspace , \end{aligned}\ ] ] where ( [ eq : ex3 ] ) comes from proposition [ prop : live ] , ( [ eq : ex4 ] ) is a property of expectations , and ( [ eq : ex5 ] ) comes from ( [ eq : ex2 ] ) .since and coincide , the hypothesis yields the self - consistency of .this completes the proof of proposition [ prop : valphi ] .[ lem : existence ] there exists a live and self - consistent permutation .we use iteratively proposition [ prop : rec ] in order to build a permutation such that , for every , 1 . ; 2 . . by construction is live and .proposition [ prop : valphi ] yields the self - consistency of , and lemma [ lem : existence ] follows .a drawback of the permutation - enumeration algorithm is that it considers each and every possible permutation of the random vertices , so is a lower bound for the worst - case complexity of this algorithm .strategy - improvement algorithms avoid such enumerations , instead these algorithms proceed by successive improvements of a strategy : information about sub - optimality of a strategy is used to determine a `` better '' strategy , which ensures convergence to an optimal strategy . in this section , we emulate this idea with a permutation - improvement algorithm . starting from an initial permutation , we would like to improve again and again until the permutation strategies and are optimal . to test optimalitywe check that is live and self - consistent ( see lemma [ lem : correctness ] ) . when is live but _not _ self - consistent we compute a new permutation which is live and``better '' than .a natural improvement policy consists in choosing consistent with the -values i.e. refines the pre - order induced by .unfortunately this is too nave : the corresponding algorithm does not always terminate , a counter - example is given by figure [ fig : naive ] .a counter - example for the nave improvement algorithm . ]if we start with the permutation , the -strategies are as follows : in goes to and in goes to .hence , the -values of vertices , , and are respectively , , and , so is not self - consistent .the next permutation is , and the following -strategies ensue : in goes to and in goes to .the -values of vertices , , and are respectively , , and , so is not self - consistent either. 
moreover , the next permutation is , so the nave algorithm oscillates endlessly between and , never reaching the correct permutation .the correct permutation - improvement policy is a bit more complex : given a live but not self - consistent permutation , we choose a permutation which is live and self - consistent in the one - player game ] in line [ line : improve ] relies on the computation of values of the one - player game ] in line [ line : improve ] is achieved in polynomial time in the following way .first , compute the values of the one - player game ] .according to proposition [ prop : subgame ] the game ] .let us compare briefly the permutation - enumeration and the permutation - improvement algorithms .each improvement step of the permutation - improvement algorithm requires the computation of values of a one - player ssg , which can be performed using linear programming .these values could be computed as well using a permutation - improvement policy or a strategy - improvement algorithm in order to avoid linear programming altogether .either way , we have to forfeit one of the advantages of the permutation - enumeration algorithm : the computational simplicity of its inner loop . on the other hand, we do not know any non - trivial lower bound on the number of loops in a run of the permutation - improvement algorithm : it may be polynomial . the correctness proof is based on the following two results .[ lem : doubleliveness ] let be a positional strategy for and be a permutation .if is live in ] , respectively . by definition , ] is the same attractor in ] , we get \subseteq \bigcup_{j > i}w_{{\mathbf{f}}}[j ] \enspace .\label{eq : double1}\ ] ] thus , the liveness of in follows from its liveness in ] is normalised . in the proof of proposition [ prop : live ] , we have shown the existence of a positive real number such that for any strategy for min and vertex , hence only has value in ] hence proposition [ prop : subgame ] follows . the value of a strategy is denoted and defined by : for proving termination of the permutation - improvement algorithm we prove that successive strategies chosen by the algorithm have greater and greater values .[ lem : progress ] let be a live permutation in and be a live and self - consistent permutation in ] .the self - consistency of in ] implies that the -strategy of player in ] hence and ( [ eq : coherence ] ) follows .consider now the sequence , where is the strategy where plays according to until the pebble has visited random vertices , and plays according to afterwards .in particular .we show that for every vertex the sequence is non - decreasing and that its limit is less than .since this will prove lemma .we first show by induction that for any integer , _ basis ( ) : _ we have to prove that values of are greater than values of .let be a vertex , be the index of the -region of in , and be the index of the -region of in ] the definition of the -regions gives and ( [ eq : coherence ] ) yields : if plays with , the definition of ensures that the first random vertex belongs to , so and yields : on the other hand we prove : let denote -values in ] starting from and consistent with a -strategy for in ] which yields the self - consistency of in ] hence the -values are equal in and $ ] and is also self - consistent in .we have presented two algorithms computing optimal strategies in simple stochastic games : the permutation - enumeration and the permutation - improvement algorithms . 
both of them rely on the existence of optimal permutation strategies .the permutation - enumeration algorithm simply tests every permutation until it finds a live and self - consistent one .the permutation - improvement algorithm uses a smarter policy in order to choose a `` better '' permutation in the next round , _ la _ hoffman - karp .the permutation - enumeration algorithm has exponential worst - case complexity but it is a witness that solving ssgs is fixed - parameter tractable when the parameter is the number of random vertices . the nominal complexity of the permutation - improvement algorithm is a bit higher but we do not know any non - trivial lower bound on the number of improvement steps : the permutation - improvement algorithm may actually run in polynomial time . *acknowledgements * we would like to thank marcin jurdziski for some fruitful discussions , the anonymous reviewers for several useful suggestions and julien cristau for his invaluable comments during the writing of the final version .anne condon .n algorithms for simple stochastic games . in _ advances in computational complexity theory _ , volume 13 of _ dimacs series in discrete mathematics and theoretical computer science _ , pages 5173 .american mathematical society , 1993 .
|
simple stochastic games are two - player zero - sum stochastic games with turn - based moves , perfect information , and reachability winning conditions . we present two new algorithms computing the values of simple stochastic games . both of them rely on the existence of optimal _ permutation strategies _ , a class of positional strategies derived from permutations of the random vertices . the `` permutation - enumeration '' algorithm performs an exhaustive search among these strategies , while the `` permutation - improvement '' algorithm is based on successive improvements , _ la _ hoffman - karp . our algorithms improve previously known algorithms in several aspects . first they run in polynomial time when the number of random vertices is fixed , so the problem of solving simple stochastic games is fixed - parameter tractable when the parameter is the number of random vertices . furthermore , our algorithms do not require the input game to be transformed into a stopping game . finally , the permutation - enumeration algorithm does not use linear programming , while the permutation - improvement algorithm may run in polynomial time .
|
many studies exist for the numerical solutions of the differential equations using splines .splines are piecewise functions which have certain continuity at the joint points given to set up the splines .the spline related numerical techniques mainly offer the economical computer code and easy computational calculations .thus they are preferable in forming the numerical methods . until now, polynomial splines have been extensively developed and used for approximation of curve and surfaces and finding solutions of the differential equations .the polynomial spline based algorithms have been found to be quite advantageous for finding solutions of the differential equations . because it has been demonstrated that they yield the lower cost and simplicity to write the program code .base of splines known as the b - splines is also widely used to build up the trial functions for numerical methods .the exponential spline is proposed to be more general form of these splines . in the approximation theory ,the exponential b - splines are shown to model the data which have sudden growth and decaywhereas polynomials are not appropriate due to having osculatory behavior . since some differential equations have steep solutions , the use of the exponential b - splines in the numerical methods may exhibit good solutions for differential equations .mccartin has introduced the exponential b - spline as a basis for the space of exponential splines . the exponential b - spline properties accord with those of polynomial b - splines such as smootness , compact support , positivity , recursion for derivatives .thus the exponential b - splines can be used as the trial function for the variational methods such as galerkin and collocation methods .the exponential b - spline based methods have been started to solve some differential equations : numerical solution of the singular perturbation problem is solved with a variant of exponential b - spline collocation method in the work , the cardinal exponential b - splines is used for solving the singularly perturbed problems , the exponential b - spline collocation method is built up for finding the numerical solutions of the self - adjoint singularly perturbed boundary value problems in the work , the numerical solutions of the convection - diffusion equation is obtained by using the exponential b - spline collocation method mohammadi . the collocation methods based on the exponential b - spline functions have been constructed to solve the differential equations . in this study , the exponential b - spline functionare used to set up the trial functions which are placed in place of the unknown variable of the differential equations for the galerkin finite element method .thus nonlinear rlw equation will be solved with the proposed method numerically .the rlw equation describes a large number of important physical phenomena , such as shallow waters and plasma waves .therefore it plays a major role in the study of nonlinear dispersive waves . because of having limited analytical solutions , numerical analysis of the rlw equation has an importance in its study .various techniques have been developed to obtain the numerical solution of this nonlinear partial differential equation , some of which are finite difference methods , finite element methods alexander , serna , luo , saka2,saka3,dogan , dag2,dag3,saka , avilez , mei and spectral methods .the paper is outlined as follows . in section 2 , exponential b - splines and their some basic relationsare introduced . 
in section 3 , the application of the numerical method is given .the efficiency and the accuracy of the present method are investigated by using three numerical experiments related to propagation of single solitary wave , interaction of two solitary waves and wave generation .finally some remarks are concluded in the last section .in this study , we will consider the regularized long wave ( rlw ) equation is space coordinate , is time , is the wave amplitude and and are positive parameters .boundary and initial conditions of the eq.([1 ] ) are , u\left ( x,0\right ) = f\left ( x\right ) , x\in \left [ a , b\right ] . ] such that and let be the b - splines at the points of together with knots , outside the interval ] the can be defined as & \text { \ } & \text{if } x\in \left [ x_{i-2},x_{i-1}\right ] ; \\ a_{1}+b_{1}\left ( x_{i}-x\right ) + c_{1}e^{p\left ( x_{i}-x\right ) } + d_{1}e^{-p\left ( x_{i}-x\right ) } & \text { } & \text{if } x\in \left [ x_{i-1},x_{i}\right ] ;\\ a_{1}+b_{1}\left ( x - x_{i}\right ) + c_{1}e^{p\left ( x - x_{i}\right ) } + d_{1}e^{-p\left ( x - x_{i}\right ) } & \text { } & \text{if } x\in \left [ x_{i},x_{i+1}\right ] ; \\ b_{2}\left [ \left ( x - x_{i+2}\right ) -\dfrac{1}{p}\left ( \sinh \left ( p\left ( x - x_{i+2}\right ) \right ) \right ) \right ] & \text { } & \text{if } x\in \left [ x_{i+1},x_{i+2}\right ] ; \\ 0 & \text { } & \text{otherwise.}% \end{array}% \right .\label{3}\ ] ] where , \\ c_{1}=\dfrac{1}{4}\left [ \dfrac{e^{-ph}\left ( 1-c\right ) + s\left ( e^{-ph}-1\right ) } { \left ( phc - s\right ) \left ( 1-c\right ) } \right ] , \text { } % d_{1}=\dfrac{1}{4}\left [ \dfrac{e^{ph}\left ( c-1\right ) + s\left ( e^{ph}-1\right ) } { \left ( phc - s\right ) \left ( 1-c\right ) } \right]% \end{array}% ] .we seek an approximation to the analytical solution in terms for the exponential b - splines are time dependent unknown to be determined from the boundary conditions and galerkin approach to the equation ( [ 1 ] ) .the approximate solution and their derivatives at the knots can be found from the eq .( [ 3]-[4 ] ) as applying the galerkin method to the rlw equation with the exponential b - splines as weight function over the element ] can be written as where quantities are element parameters and are known as the element shape functions .the contribution of the integral equation ( [ 6 ] ) over the sample interval ] is determined from the eq.([15 ] ) .we have carried out three test problems to demonstrate the given algorithm .accuracy of the method is measured by the error norm: rlw equation satisfy the following conservation laws which are corresponding to mass , momentum and energy : in numerical calculations , the conservation laws are calculated by use of the trapezoidal rule and the determination of in the exponential b - spline is made by experimentally .the exact solution of rlw equation is given in as follows: this form of the solution is known as a single solitary wave with the amplitude and the velocity .the initial condition is obtained by taking in eq.([exact])*. * we have used boundary conditions values of the parameters seen in the above equations as these parameters and the mentioned initial condition , the solitary wave moves across the interval in time period similar with some early studies , space step andtime step are used in numerical calculations . in this test problem , the determined by scanning the interval ] with the increment **. 
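to make the quantities of this test problem concrete , the following sketch ( ours , not the authors ' code ) evaluates a closed - form solitary wave of the equation , the discrete error norm and trapezoidal - rule values of the invariants . the parameter values , the grid and the integrand used for the third invariant are assumptions on our part and should be checked against eq . ( [ exact ] ) and the conservation laws quoted above .

```python
import numpy as np

eps, mu, c, x0 = 1.0, 1.0, 0.1, 0.0      # hypothetical choices; the paper uses its own
a, b, h = -40.0, 60.0, 0.125             # interval, step sizes and scanned value of p
x = np.arange(a, b + h / 2, h)

def solitary_wave(x, t):
    """Closed-form solitary wave of u_t + u_x + eps*u*u_x - mu*u_xxt = 0 in the
    parameterization commonly used in the RLW literature: amplitude 3c, speed 1+eps*c."""
    k = np.sqrt(eps * c / (4.0 * mu * (1.0 + eps * c)))
    return 3.0 * c / np.cosh(k * (x - (1.0 + eps * c) * t - x0)) ** 2

def linf_error(u_num, u_ref):
    return np.max(np.abs(u_num - u_ref))

def trapezoid(y, h):
    return h * (y.sum() - 0.5 * (y[0] + y[-1]))

def invariants(u, h):
    """Trapezoidal-rule values of the three conserved quantities; the integrand of
    the third one is the form usually quoted for eps = 1 and is an assumption here."""
    ux = np.gradient(u, h)
    return (trapezoid(u, h),
            trapezoid(u**2 + mu * ux**2, h),
            trapezoid(u**3 + 3.0 * u**2, h))

u0 = solitary_wave(x, 0.0)
print(invariants(u0, h), linf_error(u0, solitary_wave(x, 0.0)))
```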
* * the solution profiles are illustrated in figure [ fig1-fig1b ] at selected times .it is clear from this figure that the peak of the solitary wave remain kept during the running time .the distribution of absolute error at for amplitude and is given in figure [ fig2-fig3 ] , respectively .the maximum error for * ebsgm * occurs at the right hand boundary seen in figure fig2-fig3 .we believe that this error arises due to magnitude of the wave and the physical boundary conditions to fit and .if we extend the solution interval from ] error norm is seen to reduce from . to at time the absolute error norms and the values of the conservation invariants are recorded in table 2 and 3 for different amplitudes . to make a comparison with some early studies ,the maximum errors with the conservation invariants are presented in table 4 and 5 . according to this tables , *ebsgm * is more accurate method than the some others .the values of the conservation invariants at different times remain fairly the same when compared with the analytical invariants for amplitude when we take the amplitude as , the value of the conservation invariant has some minor difference than the analytical value of it , whereas and are fairly same at different times. in this section , we will study the interaction of two solitary waves having different amplitudes and moving in the same direction . the initial condition is the following parameters are chosen to coincide values in the literature: , these parameters yields the solitary waves with the amplitudes and positioning around points and respectively .the computation is carried out up to time with time step over the finite interval $ ] and is selected as for * ebsgm .* numerical solutions of at various times are depicted in figure [ fig4 ] , and the initial solution has been propagated rightward .it is seen from the figure [ fig4 ] that the solitary waves are subjected to a collision about time and after the interaction they propagate with their original amplitudes to the right seeing at time , and . ]the conservation invariants are presented at some selected times in table 6 . according to this table , during the interaction there are some changes at the conservation constants and whereas the constant remain nearly same . an applied force like an introduction of fluid mass , an action of some mechanical device , to a free surface , will induce waves . in this numerical experiment, we take following boundary condition to generate waves with the rlw equation. is studied to generate waves .this forced boundary condition known as a wave maker at one end .the parameters are chosen to make a comparison with earlier works over the region . is selected as for * ebsgm .* during the run time of the algorithm , five solitary waves are produced .although first four waves have reached amplitudes larger than forcing amplitudes , the last one is less than that of the forcing one .when forcing is switched off , the last wave has not enough time to evolve .subsequently , no new wave are born .a view of travelling solitary wave is presented at time in figure fig5 .amplitudes of solitary waves versus time are depicted in figure fig6 . at various time , amplitudes of the solitary waves and the conservation constants are demonstrated in table 7 for * ebsgm .* in addition to this , other amplitudes of the solitary waves which are reduced the other studies are shown in table 7 for time .our results are in conformity with that of studies . and amplitude at time , , , . 
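For the interaction experiment described above, the amplitudes and initial positions used in the paper were not recoverable from the text; the following minimal sketch only illustrates how such a two-solitary-wave initial condition is typically assembled, so all numerical values are placeholders.

```python
import numpy as np

def two_soliton_initial_condition(x, amplitudes=(0.6, 0.2), centres=(15.0, 35.0),
                                  eps=1.0, mu=1.0):
    """Superposition of two well-separated solitary-wave profiles used as the
    initial condition of the interaction experiment.  The amplitudes (3*c_j)
    and centres used in the paper were not recoverable, so these are placeholders."""
    u = np.zeros_like(x, dtype=float)
    for a, xj in zip(amplitudes, centres):
        c = a / 3.0
        k = 0.5 * np.sqrt(eps * c / (mu * (1.0 + eps * c)))
        u += a / np.cosh(k * (x - xj)) ** 2
    return u
```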
In this paper, we investigate the utility of the exponential B-spline Galerkin algorithm for solving the RLW equation. The efficiency of the method is tested on the propagation of a single solitary wave, the interaction of two solitary waves, and wave generation. To assess its accuracy, the error norm and the conservation quantities are documented for the obtained results. The exponential B-spline based method gives accurate and reliable results for the RLW equation. For the first test problem, *EBSGM* is more accurate than the collocation-based method and comparable to some Galerkin methods. The second test problem has no exact solution, so the simulation is shown graphically and the conservation quantities are tabulated. Generation of waves by using a variable boundary condition at the left end has been achieved, and the wave profiles and their amplitudes are documented. In conclusion, the numerical algorithm based on the exponential B-spline functions performs well compared with other existing numerical methods for the solution of the RLW equation. The author, Melis Zorahin Grgl, is grateful to the Scientific and Technological Research Council of Turkey for a PhD scholarship, and all of the authors thank the Scientific and Technological Research Council of Turkey for the financial support of their project.
|
In this paper, exponential B-spline functions are used for the numerical solution of the RLW equation. Three numerical examples, concerning the propagation of a single solitary wave, the interaction of two solitary waves and wave generation, are employed to illustrate the accuracy and efficiency of the method. The obtained results are compared with those of some earlier studies. *Keywords:* exponential B-spline; Galerkin method; RLW equation.
|
energy transfer ( wet ) has attracted significant interests recently for delivering energy to electrical devices over the air .generally , wet can be implemented by inductive coupling via magnetic field induction , magnetic resonant coupling based on the principle of resonant coupling , or electromagnetic ( em ) radiation .the different types of wet techniques in practice have their respective advantages and disadvantages ( see e.g. and the references therein ) .for example , inductive coupling and magnetic resonant coupling both have high energy transfer efficiency for short - range ( e.g. , several centimeters ) and mid - range ( say , a couple of meters ) applications , respectively ; however , it is difficult to apply them to charge freely located devices simultaneously .in contrast , em radiation based far - field wet , particularly over the radio frequency ( rf ) bands , is applicable for much longer range ( up to tens of meters ) applications and also capable of charging multiple devices even when they are moving by exploiting the broadcast nature of rf signal propagation ; whereas its energy transfer efficiency may fall rapidly over distance .rf signal enabled wet is anticipated to have abundant applications in providing cost - effective and perpetual energy supplies to energy - constrained wireless networks such as sensor networks in future .in fact , applying rf - based wet in various types of wireless communication networks has been extensively studied in the literature recently . in general, there are two main lines of research that have been pursued , namely simultaneous wireless information and power transfer ( swipt ) ( see e.g. ) and wireless powered communications ( wpc ) ( see e.g. ) , where the information transmission in the network is in the same or opposite direction of the wet , respectively . for both swipt and wpc ,how to optimize the energy transfer efficiency from the energy transmitter ( et ) to one or more energy receivers ( ers ) by combating the severe signal power loss over distance is a challenging problem . to efficiently solve this problem , multi - antenna or multiple - input multiple - output ( mimo ) techniques , which have been successfully applied in wireless communication systems to improve the information transmission rate and reliability over wireless channels ,were also proposed for wet .specifically , deploying multiple antennas at the et enables focusing the transmitted energy to destined ers via beamforming , while equipping multiple antennas at each er increases the effective aperture area , both leading to improved end - to - end energy transfer efficiency . for the point - to - point mimo wet system , it has been shown in that energy beamforming is optimal to maximize the energy transfer efficiency by transmitting with only one single energy beam at the et , which is in sharp contrast to the celebrated spatial multiplexing technique used in the point - to - point mimo communication system which applies multiple beams to maximize the information transmission rate .= 1 in practical systems , the benefit of energy beamforming in mimo wet crucially relies on the availability of the channel state information ( csi ) at the et .however , acquiring such csi is particularly challenging in wet systems , since existing methods for channel learning in wireless communication ( see e.g. 
and the references therein ) may be no longer applicable .for example , one well - known solution to acquire the csi at the transmitter in conventional wireless communication is by estimating the reverse link channel based on the training signals sent by the receiver .however , this method only applies to systems operating in time - division duplex ( tdd ) and critically depends on the accuracy of the assumption made on the reciprocity between the forward and reverse link channels .furthermore , applying this method to wet systems requires a more careful design of energy - efficient training signals at the er , since they consume part of the er s energy that is harvested from the et .alternatively , another commonly adopted solution to obtain csi at the transmitter in wireless communication is by sending training signals from the transmitter to the receiver , through which the receiver can estimate the channel and then send the estimated channel back to the transmitter via a feedback channel .this method applies to both tdd and frequency - division duplex ( fdd ) based systems ; however , it requires complex baseband signal processing at the receiver for channel estimation and feedback , which may not be implementable at the er in wet system due to its practical hardware limitation . shows a commonly used er design for wet , in which each receive antenna ( also known as rectenna ) first converts the received rf signal to a direct current ( dc ) signal via a rectifier , and then the dc signals from all receive antennas are combined to charge a battery .evidently , it is difficult in this er design to incorporate baseband signal processing for channel estimation . ] to overcome the above drawbacks of existing methods , it is desirable to investigate new channel learning and feedback schemes for mimo wet systems by taking into account the hardware limitation at each er , which motivates this work . in this paper , we consider a multiuser mimo system for wet as shown in fig .[ fig : system ] , where one et with transmit antennas broadcasts wireless energy to a group of ers each with receive antennas via transmit energy beamforming over a given frequency band .we assume that the ers can send their feedback information to the et perfectly over orthogonal feedback channels ( by e.g. piggybacking the feedback information with their uplink data in a wireless powered sensor network ) .under this system setup , we propose a two - phase transmission protocol for channel learning and energy transmission , respectively . in the channel learning phase, the et aims to learn the mimo channels to different ers by adjusting the training signals according to the individual feedback information from each er . based on the estimated channels , in the energy transmission phase , the et then designs optimal transmit energy beamforming to maximize the weighted sum - power transferred to all ers .in particular , we propose a new channel learning algorithm that requires only one feedback bit from each er per feedback interval .specifically , each feedback bit indicates the increase or decrease of the harvested energy at each er in the present versus the previous intervals , which can be practically measured at each er without changing the existing energy harvesting circuits as shown in fig .[ fig : system ] ( by e.g. connecting an `` energy meter '' at the sum output of the dc signals from different receive antennas ) . 
based on such feedback information, the et adjusts its transmitted training signals in subsequent feedback intervals during the channel learning phase and at the same time obtains improved estimates of the mimo channels to different ers .it is worth noting that there have been several alternative schemes reported in the literature for one - bit feedback based channel learning , e.g. , _cyclic jacobi technique ( cjt ) _ , _ gradient sign _ , and _ distributed beamforming _ , which have been proposed and studied in different application scenarios .specifically , the cjt algorithm was proposed for the secondary transmitter ( st ) to learn its interference channel to the primary receiver ( pr ) in a mimo cognitive radio system , in which the st adjusts its transmitted signals over consecutive time slots based on the one - bit information indicating the increase or decrease of its resulted interference power at the pr , which is extracted from the feedback signals of the pr .the gradient sign algorithm was proposed to estimate the dominant eigenmode of a point - to - point mimo channel , in which the transmitter obtains a one - bit feedback from the receiver per time slot , which indicates the increase or decrease of the signal - to - noise - ratio ( snr ) at the receiver over two consecutive slots .the distributed beamforming algorithm was proposed to learn the channel phases in a system consisting of multiple distributed single - antenna transmitters simultaneously sending a common message to a single - antenna receiver , where each transmitter updates its own signal phase in a distributed manner by using the one - bit feedback from the receiver indicating whether the current snr is larger or smaller than its recorded highest snr so far .notice that the above three algorithms can all be applied to one - bit feedback based channel learning in the mimo wet system of our interest ; however , these methods have the common limitation that they can only be used to learn the eigenvectors or the dominant eigenmode of a single - user mimo channel matrix at each time , instead of learning multiple users mimo channels exactly at the same time . as a result, they may not achieve the optimal energy transfer efficiency in the multiuser mimo wet system based on one - bit feedback . in this paper, we propose a new approach to design the one - bit feedback based mimo channel learning for wet by applying the celebrated analytic center cutting plane method ( accpm ) in convex optimization . to the authors best knowledge, this paper is the first attempt to apply the accpm approach for the design of channel learning with one - bit feedback . for our proposed accpm based channel learning algorithm, we first provide an analysis for its worst - case convergence .it is shown that the accpm based channel learning can obtain the estimates of all mimo channels each with arbitrary number of receive antennas , , in at most number of feedback intervals with denoting a desired accuracy , and representing the ceiling function of real numbers . from this result , it is further inferred that when , the proposed algorithm has the same analytic convergence performance regardless of the number of ers , , which shows its benefit of simultaneously learning multiuser mimo channels . 
finally , we compare the performance of our proposed channel learning algorithm against the aforementioned three benchmark algorithms in terms of both convergence speed and energy transfer efficiency .it is shown through extensive simulations that our proposed algorithm achieves faster convergence for channel learning as well as higher energy transfer efficiency than the other three algorithms ; while the performance gain of our proposed algorithm becomes more significant as the number of ers in the wet system increases .the remainder of this paper is organized as follows .section [ sysmod ] introduces the system model and the two - phase transmission protocol .section [ sec : one - bit ] presents the proposed channel learning algorithm with one - bit feedback for the point - to - point or single - user mimo wet system as well as its convergence analysis .section [ sec : one - bit : multi ] extends the channel learning algorithm and analysis to the general multiuser wet system .section [ sec : numerical ] provides simulation results to evaluate the performance of our proposed algorithm as compared to other benchmark algorithms .finally , section [ sec : conclusion ] concludes the paper . _notation : _ boldface letters refer to vectors ( lower case ) or matrices ( upper case ) . for a square matrix , and denote its determinant and trace , respectively , while and mean that is positive semi - definite and negative semi - definite , respectively . for an arbitrary - size matrix , , , , and denote the frobenius norm , rank , conjugate transpose and transpose of , respectively . , , and denote an identity matrix , an all - zero matrix , and an all - one column vector , respectively , with appropriate dimensions . and denotes the space of complex and real matrices , respectively . denotes the statistical expectation . denotes the euclidean norm of a complex vector , and denotes the magnitude of a complex number . denotes the complex number .we consider a multiuser mimo broadcast system for wet as shown in fig .[ fig : system ] , where one et with transmit antennas delivers wireless energy to a group of ers , denoted by the set . for notational convenience ,each er is assumed to be deployed with the same number of receive antennas , while our results directly apply to the case when each er is with different number of antennas .we assume a quasi - static flat fading channel model , where the channel from the et to each er remains constant within each transmission block of our interest and may change from one block to another .we denote each block duration as , which is assumed to be sufficiently long for typical low - mobility wet applications .we consider linear transmit energy beamforming at the multiple - antenna et . without loss of generality, we assume that the et sends energy beams , where is our design parameter to be specified later . let the beamforming vector be denoted by and its carried energy - modulated signal by , .then the transmitted signal at et is given by since s do not carry any information , they can be assumed to be independent sequences from an arbitrary distribution with zero mean and unit variance , i.e. , .furthermore , we denote the transmit covariance matrix as .note that given any positive semi - definite matrix , the corresponding energy beams can be obtained from the eigenvalue decomposition ( evd ) of with .assume that the et has a transmit sum - power constraint over all transmit antennas ; then we have . 
with transmit energy beamforming , eacher can harvest the wireless energy carried by all energy beams from its receive antennas .denote as the mimo channel matrix from the et to er , and .then by letting denote the frobenius norm of the matrix , i.e. , , we obtain the normalized channel matrix from the et to er as ( or ) with .accordingly , the harvested energy at er over one block of interest is expressed as where denotes the energy harvesting efficiency at each receive antenna ( cf .[ fig : system ] ) .since is a constant , we normalize it as in the sequel of this paper unless otherwise specified .it is assumed that each er can not directly estimate the mimo channel ( or ) given its energy harvesting receiver structure ( cf .[ fig : system ] ) ; instead , it can measure its average harvested power over a certain period of time by simply connecting an `` energy meter '' at the combined dc signal output shown in fig . [fig : system ] .we aim to design the energy beams at the et to maximize the weighted sum - energy transferred to ers , i.e. , with given in ( [ eqn:1 ] ) , over each transmission block subject to a given transmit sum - power constraint , where denotes the energy weight for er with . in order to ensure certain fairness among different ers for wet ,it is desirable to assign higher energy weights to the ers more far apart from the et .accordingly , in this paper we set the energy weight to be proportional to the reciprocal of the channel power gain to the respective er , i.e. , as a result , the weighted sum - energy transferred to ers can be re - expressed as with and . as a result, we can formulate the weighted sum - energy maximization problem as it has been shown in that the optimal solution to ( [ eqn : problem : maxk1 ] ) is given by , which achieves the maximum value of , with and denoting the dominant eigenvalue and its corresponding eigenvector of , respectively . since , this solution implies that sending one energy beam ( i.e. , ) in the form of is optimal for our multiuser mimo wet system of interest .this solution is thus referred to as the _ optimal energy beamforming ( oeb ) _ for a given . here, implementing the oeb only requires the et to have the perfect knowledge of the normalized mimo channels , , but does not require its knowledge of the average channel gain s .s ( instead of setting them as in ( [ eqn : revision1 ] ) ) . in such cases, it may be necessary for the et to have an estimate of the average channel gain s for setting s . since s change slowly over time, they can be coarsely estimated in practice by e.g. measuring the received signal strength from each er in the reverse link , by assuming a weaker form of channel reciprocity . ]= 1 in order for the et to practically estimate the mimo channels , , we propose a transmission protocol for the multiuser mimo wet system as shown in fig .[ fig : protocol ] , which consists of two consecutive phases in each transmission block for the main purposes of channel learning and energy transmission , respectively .we explain these two phases of each transmission block in more detail as follows .the channel learning phase corresponds to the first amount of time in each block of duration , which is further divided into feedback intervals each of length , i.e. , . for convenience , we assume that is an integer denoting the total block length in number of feedback intervals . 
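As a concrete illustration of the optimal energy beamforming described above, the following sketch computes the harvested energy tr(H S H^H) for each ER and builds the rank-one covariance from the dominant eigenvector of the weighted sum of normalised channel Grams. The random channels and the power budget are placeholders, and the weights follow the 1/||H_k||_F^2 choice stated in the text.

```python
import numpy as np

def harvested_energy(H, S):
    """Energy harvested per unit time by an ER with MIMO channel H (N x M)
    under transmit covariance S: E{||H x||^2} = tr(H S H^H)."""
    return float(np.real(np.trace(H @ S @ H.conj().T)))

def optimal_energy_beamforming(H_list, P):
    """Rank-one covariance S = P * v v^H, where v is the dominant eigenvector of
    the weighted sum of normalised channel Grams (weights 1/||H_k||_F^2)."""
    M = H_list[0].shape[1]
    G = np.zeros((M, M), dtype=complex)
    for H in H_list:
        G += H.conj().T @ H / np.linalg.norm(H, "fro") ** 2
    _, eigvecs = np.linalg.eigh(G)
    v = eigvecs[:, -1]                       # eigenvector of the largest eigenvalue
    return P * np.outer(v, v.conj())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    M, N, K, P = 4, 2, 3, 1.0                # placeholder antenna counts and power
    H_list = [(rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
              for _ in range(K)]
    S = optimal_energy_beamforming(H_list, P)
    weighted = sum(harvested_energy(H, S) / np.linalg.norm(H, "fro") ** 2 for H in H_list)
    print("weighted sum-energy under the OEB covariance:", weighted)
```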
during this phase, the et transmits different training signals ( each specified by a corresponding transmit covariance matrix ) over feedback intervals .let the transmit covariance at the et in interval be denoted by .then the transferred energy to the er over the interval is given by .in the meanwhile , er measures its harvested energy amount and based on it feeds back one bit at the end of the interval , denoted by , to indicate whether the harvested energy in the interval is larger ( i.e. , ) or smaller ( i.e. , ) than that in the interval , . for the convenience of our analysislater , we set such that . more specifically , if , then ; while if , then . we also denote and equivalently for convenience .notice that the feedback interval should be designed considering the practical feedback link rate from each er to the et as well as the sensitivity of the energy meter at each er . for the purpose of exposition , we assume in this paper that s are all perfectly measured at corresponding ers , and thus s are all accurately determined at the ers and then sent back to the et without any error.s at the ers due to the rectifier noise and feedback errors in the received s at the et due to the imperfect reverse links from the ers , both of which result in inaccurate s at the et . it is thus interesting to investigate their effects on the performance of our proposed channel learning algorithm with one - bit feedback , which , however , are beyond the scope of this paper . ]furthermore , we assume that the consumed energy for sending the one - bit feedback s is negligible at each er as compared to its average harvested energy . at the end of the channel learning phase , by using the collected feedback bits from all ers , the et can obtain an estimate of the normalized mimo channel for each er , which is denoted by , .the details of training signal design and channel estimation at the et based on the one - bit feedback information from one or more ers will be given later in sections [ sec : one - bit ] and [ sec : one - bit : multi ] .the subsequent energy transmission phase in each block corresponds to the remaining amount of time .given the estimated s from the channel learning phase , we can obtain the estimate of as , and accordingly have the estimate of its dominant eigenvector as . then based on the principle of oeb , the et sets the ( rank - one ) transmit covariance in the energy transmission phase as .accordingly , the weighted sum - energy transferred to all ers during this phase is expressed as . combining the above two phases ,the total weighted sum - energy transferred to all the ers over one particular block is given by in ( [ eqn : qpro ] ) , we observe that if the estimated mimo channel s are all accurate with a given finite ( or ) , then it follows that . in this case, we can have by increasing the block duration , i.e. , or . however , given finite or ( which needs to be chosen to be smaller than the channel coherence time in practice ) , there is in general a trade - off in setting the time allocations , i.e. , versus , between the channel learning and energy transmission phases in order to maximize in ( [ eqn : qpro ] ) , as will be demonstrated latter by our numerical results in section [ sec : numerical ] . 
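The per-interval feedback rule can be emulated as follows, assuming noiseless energy measurements as stated in the text; the use of +1/-1 as the two feedback values is an assumption, since the exact convention is not reproduced above.

```python
import numpy as np

def feedback_bit(G, S_curr, S_prev):
    """One-bit feedback of an ER for the current interval: returns +1 if the
    energy harvested under S_curr exceeds that under S_prev, and -1 otherwise
    (energy measurements are assumed noiseless, as in the text)."""
    q_curr = np.real(np.trace(G @ S_curr))
    q_prev = np.real(np.trace(G @ S_prev))
    return 1 if q_curr > q_prev else -1
```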
in the aboveproposed two - phase transmission protocol for multiuser mimo wet , the key challenge lies in the design of channel learning algorithms at the et to estimate the normalized mimo channel s based only on the one - bit feedbacks from different ers in the first channel learning phase , which is thus our focus of study in the rest of this paper . in the next two sections , we first present the channel learning algorithm for the special case of one single er to draw useful insights , and then extend the algorithm to the general case with multiple ersin this section , we consider the point - to - point or single - user mimo wet system with er . for notational convenience ,we remove the user subscript in this case , and thus denote the harvested energy amount and the feedback bit at each interval as and , respectively , . furthermore , since there is only one er in the system , we denote its channel power gain as , and its normalized channel matrix to be estimated as .we aim to propose a new channel learning algorithm for the et to estimate the single - user mimo channel based on the one - bit feedbacks from the er over training intervals in the channel learning phase .the proposed algorithm is based on the celebrated accpm in convex optimization . in the following ,we first introduce accpm , then present the accpm based channel learning algorithm with one - bit feedback , and finally provide its convergence analysis .accpm is an efficient localization and cutting plane method for solving general convex or quasi - convex optimization problems , with the goal of finding one feasible point in a convex target set , where can be the set of optimal solutions to the optimization problem .suppose that any point in the target set is known _ a priori _ to be contained in a convex set , i.e. , . is referred to as the initial working set .the basic idea of accpm is to query an _ oracle _ for localizing the target set through finding a sequence of convex working sets , denoted by at each iteration , we query the oracle at a point , where is chosen as the analytic center of the previous working set . if , then the algorithm ends .otherwise , the oracle returns a _ cutting plane _ , i.e. , and satisfying that which indicates that should lie in the half space of . after the querying , the working setis then updated as . by properly choosing the cutting plane in ( [ eqn : apccm:1 ] ) based on , we can have .therefore , the returned working set will be reduced and eventually approach the target set as .it is worth noting that given query point , if the cutting plane in ( [ eqn : apccm:1 ] ) contains , then it is referred to as a _neutral cutting plane _ ; if , i.e. , lies in the interior of the cut half space , then it is named a _ deep cutting plane _ ; otherwise , it is called as a _ shallow cutting plane_. for accpm , a deep or at least neutral cutting plane is required in each iteration . in this subsection , we present the proposed channel learning algorithm based on accpm .first , we define the target set for our problem of interest .recall that our goal is to obtain an estimate of the normalized channel matrix , which is equivalent to finding any positively scaled estimate of . as a result, we define the target set as , which contains all scaled matrices of satisfying that .since is known _ a priori _ , we have the initial convex working set as i.e. 
, .next , we show that the one - bit feedback s in the feedback intervals play the role of oracle in accpm for our problem , which return a sequence of working sets to help localize the target set .consider each feedback interval as one iteration .then , for any feedback interval , , it always holds that , and thus the one - bit feedback information is always , which does not contain any useful information for localizing the target set .] by querying the one - bit feedback , the et can obtain the following inequality for and ( recall that ) : which can be regarded as a cutting plane such that lies in the half space of . accordingly , by denoting , we can obtain the working set at interval by updating , or equivalently , is evident that . from ( [ learning:4 ] ) , we can obtain the analytic center of , denoted as , which is explicitly given by ) is complex , we use and in ( [ eqn:13 ] ) to compute the analytic centers , instead of and as used in for the case of real matrices . our new definition in ( [ eqn:13 ] ) will facilitate the convergence proof for the proposed algorithm ( see appendix [ appendix:1 ] ) . ] since the problem in ( [ eqn:13 ] ) can be shown to be convex , it can be solved by standard convex optimization techniques , e.g. , cvx . notice that is also the query point for the next feedback interval .up to now , we have obtained the query point at each interval , , and the cutting plane given by ( [ eqn : energybeam:0 ] ) for accpm . to complete our algorithm, we also need to ensure that the resulting cutting plane is at least neutral given .this is equivalent to constructing the transmit covariance s such that we find such s by setting for interval and the remaining intervals , where is a hermitian probing matrix that is neither positive nor negative semi - definite in general . with the above choice ,finding a pair of and to satisfy ( [ eqn : neutral ] ) is simplified to finding the probing matrix satisfying to find such for the interval , we define a vector operation that maps a complex hermitian matrix to a real vector , where all elements of are independent from each other , and for any given complex hermitian matrix . andthe real vector , , can be realized as follows .the first elements of consist of the diagonal elements of ( that are real ) , i.e. , {aa} ] denotes the element in the row and column of ; the next elements of are composed of the ( scaled ) real part of the upper ( or lower ) off - diagonal elements of , i.e. , {ab } + [ { \mbox{\boldmath{ } } } ] _ { ba}}{\sqrt{2}} x ] s , . ]accordingly , we can express and , where due to the one - to - one mapping of , finding is equivalent to finding that is orthogonal to .define a projection matrix .then we can express , where satisfies and .thus , can be any vector in the subspace spanned by .specifically , we set where is a randomly generated vector in order to make independently drawn from the subspace . with the obtained , we have the probing matrix , in general contains both positive and negative eigenvalues . as a result ,the update in ( [ eqn : a ] ) may not necessarily yield an that satisfies both and .nevertheless , by setting to be sufficiently smaller than , we can always find a and its resulting satisfying the above two conditions with only a few random trials . in this paper , we choose .] where denotes the inverse operation of . 
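Because the explicit analytic-centre problem of eq. (13) is not reproduced above, the following cvxpy sketch shows one plausible instantiation: the analytic centre of the accumulated cutting planes f_i tr((S_i - S_{i-1}) X) >= 0, combined with log-det barriers that keep the Hermitian query point strictly between 0 and the identity. Both the barrier terms and the choice of the SCS solver are assumptions made for illustration; the paper itself only states that the problem is convex and solvable with standard tools such as CVX.

```python
import numpy as np
import cvxpy as cp

def analytic_center(cuts, M):
    """Analytic centre of a working set of the form
        { 0 <= X <= I,  f_i * tr(D_i X) >= 0,  i = 1..n },
    where cuts is a list of (D_i, f_i) with D_i = S_i - S_{i-1} Hermitian and
    f_i in {+1, -1}.  The log-det terms act as barriers for 0 < X < I."""
    X = cp.Variable((M, M), hermitian=True)
    slacks = [f * cp.real(cp.trace(D @ X)) for (D, f) in cuts]
    objective = cp.log_det(X) + cp.log_det(np.eye(M) - X)
    if slacks:
        objective += cp.sum(cp.hstack([cp.log(s) for s in slacks]))
    prob = cp.Problem(cp.Maximize(objective))
    prob.solve(solver=cp.SCS)
    return X.value
```

With an empty list of cuts this sketch returns a query point proportional to the identity, which is a natural starting point before any feedback has been collected.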
accordingly , that satisfies the neutral cutting plane in ( [ eqn : neutral ] ) is obtained .to summarize , we present the accpm based channel learning algorithm with one - bit feedback for the single - user case in table i as algorithm 1 .note that in step 3 ) of the algorithm , the iteration terminates after feedback intervals of the channel leaning phase , and in step 4 ) , the estimate of is set as the normalized matrix of the analytic center of , given by .accordingly , we can use the dominant eigenvector of as the corresponding oeb for the energy transmission phase in the single - user mimo wet system .1 ) * initialization : * set , , and .+ 2 ) * repeat : * * ; * the et transmits with ; * the er feeds back ( or ) if ( or otherwise ) ; * the et computes the query point given in ( [ eqn:13 ] ) ; * the et computes from ( [ learning:3 ] ) , obtains , and updates .\3 ) * until * .+ 4 ) the et estimates .+ for the accpm based channel learning algorithm given in table [ table2 ] , we proceed to analyze its convergence performance by assuming that ( and hence ) can be set to be arbitrarily large .we first have the following proposition . [ proposition:1 ]suppose that the target set admits certain estimation errors specified by the desired accuracy , i.e. , .then the updated s in the accpm based single - user channel learning algorithm will converge to a point in the target set with once the iteration index ( ) satisfies the following inequality : where is a constant , and the right - hand side in ( [ eqn : convergence ] ) is monotonically decreasing with .note that for the accpm based channel learning in algorithm 1 , each iteration of returns one neutral cutting plane ; as a result , the required iteration number in proposition [ proposition:1 ] is equivalent to the total number of required neutral cutting planes . based on this observation , proposition [ proposition:1 ] can be proved by borrowing the convergence analysis results of the accpm for semi - definite feasibility problems in , which shows the worst - case complexity on the total number of required neutral cutting planes given certain solution accuracy .however , only considers the case with real matrices , while our accpm based channel learning algorithm corresponds to the case involving complex matrices . to overcome this issue, we first find an equivalent real counterpart for the complex accpm based channel learning in algorithm 1 , and then prove proposition [ proposition:1 ] by showing the convergence behavior of the real counterpart algorithm based on the results in . the detailed proof is provided in appendix [ appendix:1 ] . in proposition[ proposition:1 ] , we have obtained the number of feedback intervals required for to converge in the target set subject to certain estimation errors , where can be an estimate of any scaled matrix of with . however , since our main objective is to estimate the normalized channel matrix , it is desirable to further provide the explicit number of required feedback intervals for ( the estimate of ) to converge .this is shown in the following proposition based on proposition [ proposition:1 ] .[ proposition:2 ] the accpm based single - user channel learning algorithm obtains an estimate for the normalized channel matrix with in at most number of feedback intervals .see appendix [ appendix : proof2 ] . 
from proposition[ proposition:2 ] , it is evident that the analytic convergence speed is only related to the number of transmit antennas , , but does not depend on the number of receive antennas , .this is intuitive , since our algorithm aims to learn the composite channel matrix of , which is of size .it is worth pointing out that the result in proposition [ proposition:2 ] provides merely a worst - case upper bound for the required number of feedback intervals , ; practically , the proposed algorithm can achieve the desired accuracy with much smaller number of feedback intervals , , as will be shown by our numerical results in section [ sec : numerical ] .in this section , we extend the accpm based single - user channel learning algorithm to the general multiuser mimo wet system with ers . in the following ,we first present the multiuser modification of the accpm based channel learning algorithm with one - bit feedback , and then provide its convergence analysis . in the multiuser case , we aim to implement accpm to learn the normalized channel matrices from the et to all ers , i.e. , , by using the collected one - bit feedback information from them . to this end , we need to define the corresponding target set , working sets and query points for each er , and also find a set of _ neutral _ cutting planes for all ers at each feedback interval . for each er ,similar to the single - user case , we define the target set as , and have the working sets as where the inequality of corresponds to a cutting plane obtained at the interval based on er s feedback of , . from ( [ learning:4:multik ] ), we can obtain the analytic center of ( also the query point for the next interval ) , given by thus , we have obtained the target set , working sets and query points for each er .now , to complete accpm , we also need to design the transmit covariance s to ensure that the cutting plane in ( [ eqn : energybeam:0:multik ] ) is neutral .that is , at interval , it is desirable for each er that note that given ers in the system , the transmit covariance matrix needs to satisfy equations in ( [ eqn : neutral : k ] ) for at the same time , in contrast to one single equation in the single - user case with .if , finding such an becomes infeasible , since in this case , ( [ eqn : neutral : k ] ) corresponds to a set of equations with ( real ) unknowns . 
to overcome this issue, we propose to group the ers into one or more subsets each consisting of no more than number of ers ; accordingly , at each feedback interval , the et only needs to ensure that the updated transmit covariance satisfies ( [ eqn : neutral : k ] ) for the ers in the corresponding subset , instead of all ers if .specifically , we divide the ers into subsets denoted by , , and , where and .accordingly , we also partition the feedback intervals in the channel learning phase of the two - phase protocol into subsets as shown in fig .[ fig : protocol : multiuser ] , which are given by , , and , where and .notice that for each partitioned subset of feedback intervals , , only ers in the corresponding user subset need to send their one - bit feedbacks to the et for learning their mimo channels ; accordingly , over the intervals in , the et obtains cutting planes in ( [ eqn : energybeam:0:multik ] ) only for the corresponding ers in .therefore , based on the above partitions , if , we need to slightly modify the working sets in ( [ learning:4:multik ] ) and the analytic centers ( query points ) in ( [ eqn:13:multik ] ) for each as = 1 next , we design s such that at any feedback interval ( except ) , the equations in ( [ eqn : neutral : k ] ) hold for the subset of ers in , .we set for interval , and for the remaining intervals , where denotes the probing matrix for the multiuser case ( as opposed to in ( [ eqn : a ] ) for the single - user case ) to be designed such that , with . by denoting and ,then finding such a is equivalent to finding a vector that is orthogonal to _ all the vectors _ , , i.e. , .to do so , we define a real matrix denoted by with columns composed by the normalized vectors with , based on which we obtain a projection matrix .let , where satisfies and . then we can find by setting where is a randomly generated vector . with the obtained , we have the probing matrix , and accordingly obtain . in order to obtain that satisfies both and . ]1 ) * initialization : * set , , and ; divide the ers and feedback intervals into subsets .+ 2 ) * repeat : * * ; * the et transmits with ; * find the user subset index such that ; * each er feeds back ( or ) if ( or otherwise ) ; * the et computes the query points for all ers in subset , i.e. , s , , given in ( [ eqn:13:multik : modi ] ) ; * the et computes from ( [ learning:3:multi : k ] ) based on , obtains , and updates .\3 ) * until * .+ 4 ) the et computes from ( [ eqn:13:multik : modi ] ) and estimates .+ to summarize , we present the accpm based channel learning algorithm with one - bit feedback for the multiuser case in table ii as algorithm 2 .we provide the convergence analysis for the accpm based multiuser channel learning algorithm in the following proposition .[ proposition:2:multik ] the accpm based multiuser channel learning algorithm obtains mimo channel estimates for all ers , i.e. , , with in at most number of feedback intervals .given partitioned user subsets , , each with no more than ers , we consider any subset .according to ( [ eqn : neutral : k ] ) and ( [ learning:3:multi : k ] ) , at each feedback interval we can simultaneously find neutral cutting planes each for one er in . 
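A minimal sketch of the multiuser probing direction, reusing the vec_h and ivec_h helpers from the single-user sketch above: a random vector is projected onto the common orthogonal complement of the vectorised query points of the ERs in the current subset, so that the resulting Hermitian direction leaves every tr(A X_hat_k) equal to zero.

```python
import numpy as np
# vec_h and ivec_h are the helpers defined in the single-user sketch above.

def multiuser_probing_matrix(X_hats, M, rng):
    """Hermitian probing direction A satisfying tr(A X_hat_k) = 0 for every query
    point of the ERs in the current subset (at most M^2 - 1 of them): a random
    vector is projected onto the common orthogonal complement of their vec_h images."""
    G = np.column_stack([vec_h(Xk) for Xk in X_hats])
    Q, _ = np.linalg.qr(G)                   # orthonormal basis of span{vec_h(X_hat_k)}
    r = rng.standard_normal(M * M)
    a = r - Q @ (Q.T @ r)                    # remove the components inside the span
    return ivec_h(a / np.linalg.norm(a), M)
```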
as a result , after number of feedback intervals , we will have neutral cutting planes for each er in .based on this argument together with proposition [ proposition:2 ] , it follows that in at most intervals we can have .given this result and considering that there are in total subsets of mimo channels to be estimated , proposition [ proposition:2:multik ] thus follows .proposition [ proposition:2:multik ] provides the worst - case convergence performance for arbitrary values of , and .note that if , it immediately follows from proposition [ proposition:2:multik ] that our proposed algorithm is able to learn ers mimo channels simultaneously without reducing the analytic convergence speed .= 1 in this section , we provide extensive simulation results to evaluate the performance of our proposed accpm based channel learning algorithm with one - bit feedback .we consider a multiuser broadcast system for wet as shown in fig .[ fig : simu_setup ] , where 6 ers are located at an equal distance of 5 meters from the et , but with different directions .accordingly , it is assumed that the average path loss from the et to all ers is identically 40 db .for the considered short transmission distance , the line - of - sight ( los ) signal is dominant , and thus the rician fading is used to model the channel from the et to each er .specifically , we have where is the los deterministic component , denotes the non - los rayleigh fading component with each element being an independent circularly symmetric complex gaussian ( cscg ) random variable with zero mean and covariance of ( to be consistent with the assumed average power attenuation of db ) , and is the rician factor set to be db . for the los component, we use the far - field uniform linear antenna array model with each row of expressed as ] , where only the signal phases are adjustable over different intervals of .we set as the initial energy beam .we also define ] , where is the step size that controls the algorithm accuracy and speed .next , consider the energy transmission phase .similar to the previous two cases , based on the estimated during the channel learning phase , the et applies the dominant eigenvector of as the energy beamforming vector for wet .j. xu and r. zhang , `` energy beamforming with one - bit feedback , '' in _ proc .ieee international conference on acoustics , speech , and signal processing ( icassp ) _ , pp .3513 - 3517 , may 2014 .l. xie , y. shi , y. t. hou , and w. lou , `` wireless power transfer and applications to sensor networks , '' _ ieee wireless commun .140 - 145 , aug . 2013 .a. nasir , x. zhou , s. durrani , and r. kennedy , `` relaying protocols for wireless energy harvesting and information processing , '' _ ieee trans .wireless commun .3622 - 3636 , jul . 2013 .d. j. love , r. w. heath jr ., v. k. n. lau , d. gesbert , b. d. rao , and m. andrews , `` an overview of limited feedback in wireless communication systems , '' _ ieee j. sel .areas commun .26 , no . 8 , pp . 1341 - 1365 , oct .2008 .b. gopalakrishnan and n. d. sidiropoulos , `` cognitive transmit beamforming from binary link quality feedback for point to point miso channels , '' in _ proc .ieee international conference on acoustics , speech , and signal processing ( icassp ) _ , pp .7293 - 7297 , may 2014 .
|
wireless energy transfer ( wet ) has attracted significant attention recently for delivering energy to electrical devices without the need of wires or power cables . in particular , the radio frequency ( rf ) signal enabled far - field wet is appealing to power energy - constrained wireless networks in a broadcast manner . to overcome the significant path loss over wireless channels , multi - antenna or multiple - input multiple - output ( mimo ) techniques have been proposed to enhance both the transmission efficiency and range for rf - based wet . however , in order to reap the large energy beamforming gain in mimo wet , acquiring the channel state information ( csi ) at the energy transmitter ( et ) is an essential task . this task is particularly challenging for wet systems , since existing channel training and feedback methods used for communication receivers may not be implementable at the energy receiver ( er ) due to its hardware limitation . to tackle this problem , we consider in this paper a multiuser mimo wet system , and propose a new channel learning method that requires only one feedback bit from each er to the et per feedback interval . specifically , each feedback bit indicates the increase or decrease of the harvested energy by each er in the present as compared to the previous intervals , which can be measured without changing the existing structure of the er . based on such feedback information , the et adjusts transmit beamforming in subsequent training intervals and at the same time obtains improved estimates of the mimo channels to different ers by applying an optimization technique called analytic center cutting plane method ( accpm ) . for the proposed accpm based channel learning algorithm , we analyze its worst - case convergence , from which it is revealed that the algorithm is able to estimate multiuser mimo channels at the same time without reducing the analytic convergence speed . furthermore , through extensive simulations , we show that the proposed algorithm outperforms existing one - bit feedback based channel learning schemes in terms of both convergence speed and energy transfer efficiency , especially when the number of ers becomes large . wireless energy transfer ( wet ) , multiple - input multiple - output ( mimo ) , energy beamforming , channel learning , one - bit feedback , analytic center cutting plane method ( accpm ) . [ section ] [ section ] [ section ] [ section ] [ section ] [ section ] [ section ] [ section ]
|
understanding individual human mobility is of fundamental importance for many applications from urban planning to human and electronic virus prediction and traffic and population forecasting .recently effort has focused on the study of human mobility using new tracking technologies such as mobile phones , gps , wifi , and rfid devices .while these technologies have provided deep insights into human mobility dynamics , their ongoing use for tracking human mobility involves privacy concerns and data access restrictions .additionally , the use of call data records from cellular phones to track mobility provides low resolution data typically in the order of kilometres , dictated mainly by the distances between cellular towers .recently , large online systems have been proposed as proxies for providing valuable information on human dynamics .for example , the online social networking and microblogging system twitter , which allows registered users to send and read short text messages called tweets , consists of more than 500 million users posting 340 million tweets per day .users can opt to geotag their tweet with their current location , thus providing an ideal data source to study human mobility .geotagged tweets provide high position resolution down to 10 metres together with a large sample of the population , representing a unique opportunity for studying human mobility dynamics both with high position resolution and at large spatial scales . despite the data being publicly available and having a large population of users , its representativeness of the underlying mobility dynamics remain open questions . specifically , there are three open issues with using geotagged tweets for understanding mobility patterns : ( 1 ) potential sampling bias ; ( 2 ) communication modality ; and ( 3 ) location biases for sending tweets . as a social networking service ,the population of twitter users provides a specific sample of the population where people must have an internet connection , be relatively tech savvy , and thus typically represent a younger demographic group .while sampling bias is likely to be prevalent for any technology that captures mobility dynamics , it is unclear how twitter s potential sampling bias affects the mobility patterns of geotagged tweets .another challenge is that twitter , unlike previous technologies , strictly limits content length within one message . to use twitter as a proxy for studying human mobility , it is important to understand whether this hard limit on tweet content can impact the spatiotemporal patterns of geotagged tweets . finally , it is currently unclear whether twitter users send messages from specific types of locations ( such as the home or workplace ) , and how such preferences to send tweet messages from certain locations can impact the mobility patterns observed from geo - tagged tweets. 
this paper analyses a large dataset with tweets from twitter users from september 2013 to april 2014 in australia to determine how representative are twitter - based mobility patterns of population and individual - level movement .we compare the mobility patterns observed through twitter with the patterns observed through other technologies , such as call data records .our analysis uses universal indicators for characterising mobility patterns from geotagged tweets , namely the displacement distribution and gyration radius distribution that measures how far individuals typically moves ( their spatial orbit ) .we find that the higher resolution twitter data reveals multiple modes of human mobility from intra - site to metropolitan and inter - city movements .our analysis of the time and likelihood of returning to previously visited locations shows that the strict content limit on tweets does not affect the returning patterns , although twitter users exhibit higher preference than mobile phone users for returning to their most popular location .we also observe that an individual s spatial orbit strongly correlates with their mobility features , such as their likelihood and timing to return to their home location , suggesting that categorising people by their spatial orbit can improve the predictability of their movements .we notably find that short and long - distance movers both spend most of their time in large metropolitan area , in contrast with intermediate - distance movers movements , reflecting the impact of different modes of travel . in studying the predictability of next tweet location, we find evidence of two types of twitter users , mapping to a highly predictable group that appears to have strong spatial preference for tweeting , and a less predictable group for which twitter is a better proxy to capture mobility patterns .we first characterise the movement patterns of individuals by analysing their sequences of geotagged tweets . based on these sequences, we can study the moving distances for individual trips and over the long - term .the first important characteristic in human mobility patterns is the displacement distribution , namely spatial dispersal kernel , where is the distance between a user s two consecutive reported locations .figure [ fig:1](a ) shows that the displacement distribution is characterised by a heterogenous function with ] is approximated by a double power - law , indicating the intra - city or urban movements may be composed of two separate modes .( c ) - ( d ) distribution of gyration radius follows a remarkably similar pattern as , and is fitted by the two fitting schemes as done for .the details of these two fitting schemes and statistical validation are given in the supporting information s1 text . ] the heterogenous shape of for the entire interval can be hardly captured by a single commonly - used statistical function such as a power - law or an exponential using the approach of parametric fitting .indeed , we find that can be better approximated by a combination of multiple functions , indicating the presence of multi - modality in human mobility patterns. one scheme of fitting is to use a hybrid function which is a mixture of two individual functions , namely an exponential and a stretched - exponential . 
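The empirical displacement distribution and the first fitting scheme can be reproduced along the following lines; the functional form mirrors the exponential plus stretched-exponential mixture of eq. (fit1), but the urban cut-off, the binning and the initial parameter guesses are placeholders rather than the values used in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between geotagged points, in kilometres."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dp, dl = p2 - p1, np.radians(np.asarray(lon2) - np.asarray(lon1))
    a = np.sin(dp / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

def displacements(lat, lon):
    """Consecutive displacements of one user's time-ordered geotagged tweets."""
    lat, lon = np.asarray(lat), np.asarray(lon)
    return haversine_km(lat[:-1], lon[:-1], lat[1:], lon[1:])

def hybrid_pdf(d, w, lam, beta, tau):
    """Unnormalised mixture of an exponential and a stretched exponential,
    mirroring the two urban modes described in the text."""
    return w * np.exp(-d / lam) + (1 - w) * np.exp(-(d / tau) ** beta)

def fit_hybrid(d, d_cut=100.0, nbins=40):
    """Fit the mixture to the log-binned empirical pdf below the urban cut-off
    (d_cut and the initial guesses are placeholders)."""
    d = d[(d > 0) & (d <= d_cut)]
    bins = np.logspace(np.log10(d.min()), np.log10(d_cut), nbins)
    pdf, edges = np.histogram(d, bins=bins, density=True)
    centres = np.sqrt(edges[:-1] * edges[1:])
    mask = pdf > 0
    popt, _ = curve_fit(hybrid_pdf, centres[mask], pdf[mask],
                        p0=[0.5, 0.1, 0.5, 5.0], maxfev=20000)
    return popt
```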
here , and are the parameters that control the shape of each individual function , is a coefficient that controls the mixture proportion , and is the minimum bound of the distribution .we find that , eq.([fit1 ] ) can fit the displacement distribution perfectly up to a cut - off ( the shaded area in figure [ fig:1](a ) ) , which may reflect the range limit of daily urban mobility such as commuting or shopping .therefore it is reasonable to argue that this part of the distribution , taking up about of the total displacements , approximately captures the population - level trend of urban movements in australia .the mixture function , with a significant inflection at , may account for two different modalities of urban movements : ( 1 ) intra - site movements ; ( 2 ) metropolitan movements .in particular , the first exponential function , which dominates for short displacements but declines dramatically as increases , represents the intra - site movements such as short relocations within a building .meanwhile , the stretched - exponential function that dominates for represents the metropolitan movements .the transition between these two modes at aligns well with the typical magnitude of site size .the noisy tail of the distribution beyond the urban mobility cut - off , accounting for about of the total displacements , can be roughly approximated by a power - law .this part may represent the long - distance or inter - city traveling mode .the humps in this part can be attributed to the sparse and concentrated population distribution as well as the scattered distribution of major metropolitan areas in australia .for example , the significant hump between 600km-1000 km is very likely due to the frequent traveling trips between large cities in australia .we discuss this peak in detail shortly which we attribute to intercity movement , a third mode in mobility , in section [ sec : pxy ] . the dominating stretched - exponential distribution for ] is to use a double power - law function shown in figure [ fig:1](b ) : p(d ) ~ d^-_1 & + d^-_2 & [ fit2 ] where and are the exponents for each individual power - law and is the separation point .this scheme suggests that the urban mobility mode can be also comprised of two separate modes characterised by two different power - laws , possibly capturing differences between short and long distance moves within a city . to explore the heterogeneity of mobility among individuals , we study the radius of gyration , which is another important characteristic of human mobility that quantifies the spatial stretch of an individual trajectory or the traveling scale of an individual .the radius of gyration for an individual can be calculated by , where is the individual s i - th location , is the geometric center of the trajectory and is the number of locations in the trajectory .the distribution of the radius of gyration over the whole population in figure [ fig:1](c ) has a similar shape as observed in the displacement distribution , which indicates that there is strong individual heterogeneity of traveling scale over the whole population . 
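A sketch of the per-user radius of gyration used above; taking the mean latitude and longitude as the geometric centre of the trajectory is an approximation that is adequate at the city-to-country scales considered here.

```python
import numpy as np

def radius_of_gyration(lat, lon):
    """Radius of gyration of one user's geotagged locations (km): the r.m.s.
    great-circle distance of the visited points from their geometric centre.
    Using the mean latitude/longitude as the centre is an approximation."""
    R = 6371.0
    lat, lon = np.asarray(lat), np.asarray(lon)
    clat, clon = np.radians(np.mean(lat)), np.radians(np.mean(lon))
    p, l = np.radians(lat), np.radians(lon)
    a = np.sin((p - clat) / 2) ** 2 + np.cos(p) * np.cos(clat) * np.sin((l - clon) / 2) ** 2
    d = 2 * R * np.arcsin(np.sqrt(a))
    return float(np.sqrt(np.mean(d ** 2)))
```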
in other words, there may be geographical constraints that shape the movement depending on how far people move from their home location .indeed , it has been suggested that the distribution of should be asymptotically equivalent to , if each individual trajectory is formed by displacements randomly drawn from .the general shape of the distribution looks similar to some previously reported distributions in other regions such as santo domingo in the dominican republic , suggesting highly regional specific dynamics at play , such as population density and urban layout .however , our geotagged twitter data provides position resolution at up to 10 m , compared to typical resolutions of 1 km in previous studies , allowing more fine - grained validation of these dynamics . to gain better insight into the individual mobility patterns , we measure the first - passage time probability , i.e. the probability of finding a user at the same location after a period of , as shown in figure [ fig:2](a ) .we observe a periodic fluctuation at a 24-hour interval in , representing the home - return patterns in human mobility , similar to call data records .this confirms that the periodic patterns are indeed strongly present in human movement and are independent of communication medium proxy .the similarity of between geotagged tweets and call data records provides strong validation that the intrinsic differences between the two communication media do not significantly affect their observed return dynamics .each tweet is a short message of up to 140 characters , allowing people to communicate only one idea at a time .people may then send multiple consecutive tweets from the same location to convey a series of ideas .this is in contrast to phone calls where people can convey multiple ideas within a single call without hard constraints on content volume .from the perspective of using data traces of these two technologies as proxies for human mobility , locations from consecutive geotagged tweets are thus much more likely to be the same compared to locations from consecutive phone calls . despite this difference ,the first passage time patterns are similar .we are also interested in how well geotagged tweets can reflect the visitation preference of locations .we therefore measure the probability function of finding an individual at his / her -th most visited location . can be obtained by sorting an individual s visited locations in descending order of the visitation frequency , and visited locations can be identified by performing spatial clustering with a radius of 250 m on the raw data ( see methods and supporting information s1 text ) .as shown in figure [ fig:2](b ) , we observe a zipf s law of the visitation frequency , i.e. can be described by a power - law function .it has been recently suggested that the zipf s law of the visitation frequency in human mobility is rooted in the preferential return dynamics , i.e. humans have a tendency to return to the locations they visited frequently before . in this context , the exponent reflects the strength of the preferential return . 
in particular ,when , the individual returns to the visited locations with equal probability , and as increases , the individual has a stronger tendency of returning to more frequently visited locations .moreover , we observe that , the probability of finding an individual in his / her most frequently visited location ( or home location ) , ranges between 0.45 to 0.55 depending on the number of visited locations , which is significantly higher than the value observed in the mobile phone records .this finding indicates that people are likely to tweet in their most popular or home location at nearly half the time . this consistent higher likelihood to tweet from the home location is again most likely explained by difference in technology use .people are more likely to make a cellular phone call outside the home location , while tweeting appears to be more preferential at home .it is important that mobility patterns extracted from twitter consider this higher preference for the home location for representative models .other than this distinction , geotagged tweet data appears to align well with the preferential return patterns observed previously across large populations . to better understand the individual mobility pattern and its correlation with tweeting behaviour , i.e. where and when does a user tweet , we study the randomness and predictability of the sequence of tweeting locations for each user . to this end , we measure the entropy of each individual sequence of tweeting locations , which is a common approach to capture the randomness and predictability of time - series data . herewe consider two typical entropy measures ( see methods for more details ) : ( 1 ) the unconditional entropy ( shannon entropy ) that merely measures the randomness based on the occurring frequency of each distinct tweeting location ; and ( 2 ) the real entropy that evaluates the randomness based on the full information of the sequence , i.e. not only the occurring frequency of each distinct location but also the order of locations , capturing the intrinsic spatiotemporal correlation of individual mobility as well as tweeting behaviour . in generalwe have we first determine and for users with at least tweets ( users ) and study their distribution and across the user population , with the results shown in figure [ fig : tweet_predict ] .one simple quantity which is closely related to the entropy and predictability of an individual location sequence is the number of distinct locations in that sequence .the median value of both and for users with different values of approximately follow a linearly increasing trend as increases ( panel e ) , as a higher diversity of tweeting locations is more likely to have a higher order of randomness as well. however , increases at a faster rate , which indicates that the gap between the gain of randomness based on the occurring frequency and the full information enlarges as increases , i.e. in terms of reducing the uncertainty of the user s next tweeting location , the advantage of using full historical information compared to using occurring frequency only becomes more prominent for larger .the distributions and are obtained for four user groups with a minimum number of distinct tweeting locations respectively ( panel a and c ) . for small values of ( e.g. n = 1 , 10 ) ,both and appear to be bell - shape and peak at a center value . 
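a sketch of how the two entropies , and the predictability bound obtained from them through fano's inequality ( used further below ) , can be estimated per user . the lempel - ziv estimator is written in the standard kontoyiannis - type form commonly used for mobility sequences ; whether the authors used exactly this variant is an assumption , and the brute - force matching is only adequate for short per - user sequences :

```python
import numpy as np
from collections import Counter
from scipy.optimize import brentq

def shannon_entropy(seq):
    """Unconditional (Shannon) entropy in bits: depends only on how often
    each distinct location occurs, not on the order of visits."""
    counts = np.array(list(Counter(seq).values()), dtype=float)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def real_entropy(seq):
    """Lempel-Ziv style estimate (bits) of the 'real' entropy, which also
    captures the order of visited locations: lambda_i is the length of the
    shortest subsequence starting at i that never appeared before i."""
    s, n = list(seq), len(seq)
    lambdas = []
    for i in range(n):
        k = 0
        while i + k < n and _occurs(s[:i], s[i:i + k + 1]):
            k += 1
        lambdas.append(k + 1)
    return n * np.log2(n) / sum(lambdas)

def _occurs(history, pattern):
    m = len(pattern)
    return any(history[j:j + m] == pattern for j in range(len(history) - m + 1))

def max_predictability(S, N):
    """Upper predictability bound from Fano's inequality,
    S = H(Pi) + (1 - Pi) * log2(N - 1), solved numerically for Pi."""
    if N <= 1 or S <= 0:
        return 1.0
    S = min(S, np.log2(N))                    # guard against estimator noise
    H = lambda p: -p * np.log2(p) - (1 - p) * np.log2(1 - p)
    f = lambda p: H(p) + (1 - p) * np.log2(N - 1) - S
    return brentq(f, 1.0 / N, 1.0 - 1e-12)
```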
as increases , shifts to a larger value andthe overall distribution becomes broader .the more striking result is the emergence of a bimodal distribution for when exceeds , with the second peak becoming more pronounced for .this result may suggest the existence of two distinct types of twitter users : a group that maintains low randomness and high regularity ( low entropy and high predictability ) in their tweeting location patterns despite high diversity of tweeting locations ; and another group that exhibits high randomness and low regularity in their tweeting location patterns which are consistent with their high diversity of tweeting locations .this bimodal distribution can be rooted in the users real spatial mobility , or the correlation between the tweeting preference and the mobility , keeping in mind that the sequence of tweeting locations is only a subset of the user s real visited locations . in terms of the latter cause ,the mobility patterns of the first group are more likely attributed to their tweeting location preference , and are not necessarily representative of their full mobility patterns . for the second group ,the breadth of tweeting locations along with a comparably high entropy suggests that their tweeting locations are less likely to be correlated with the tweeting preference and therefore appear to be a more representative indicator of their real mobility patterns .another important measure for predictability is the maximum bound of probability ( or the maximum predictability ) that an appropriate prediction algorithm can predict the user s next location .for instance , a user with a maximum predictability has at most 40% of his / her tweeting locations that can be predicted , while at least 60% of his / her tweeting locations appear to be random and unpredictable . can be estimated from the entropy using fano s inequality ( see methods ) .the trends of and as a function of ( panel f ) and the distributions and ( panel b and d ) clearly mirror the results observed in the entropy , with a lower predictability evident for a larger number of tweeting locations , and a bimodal distribution of predictability for or more tweeting locations .the decrease in predictability for higher , similarly , is much slower for compared to , highlighting the importance of spatiotemporal correlations in visitation patterns in predicting future tweet locations .( a ) and ( c ) expectedly increase with the minimum number of tweet locations .a bimodal distribution in emerges from n=20 and becomes more pronounced at n=25 pointing to a group with very low randomness and high predictability and another group among users with highly diverse tweeting locations and lower predictability ( b ) .the rates of entropy increase ( e ) and predictability decrease ( f ) are significantly lower for compared to . ]next , we analyse the capacity of geotagged tweets to capture the spatial orbit of movement for groups of the population defined according to their radius of gyration .we explore the probability density function , i.e. the probability that a user is observed at location in its intrinsic reference frame ( see for details ) .we measure this function for user groups of different radius of gyration , as shown in figure [ fig : pxy ] .we use the isotropy ratio , where is the standard deviation of along the y - axis and is the standard deviation of along the x - axis , to characterise the orbit of each group . 
at very short to 4 km , the isotropy ratio slightly increases ( see figure [ fig:4](a ) ) .as increases furthers , we observe an increase in anisotropy in as in ; however , this correlation between increased anisotropy and is only valid for shorter distances between 4 km to 200 km , which maps well to typical distances for the use of cars as a transport mode .movement patterns become more diffusive ( isotropic ) once again for between 200 km and 1000 km .in fact , we observe an unexpected steady rise in for distances between 200 km and 1000 km , where people typically consider modes of transport other than cars , such as trains or planes .the peak in is most likely a product of the population distribution in australia .the top 3 cities account for more than half the population , and the distances between the largest city and commercial capital ( sydney ) and the next two cities ( melbourne and brisbane ) are around 963 km and 1010 km respectively .this result suggests that frequent travellers among these cities are less directed and more diffusive in their movement within the cities , thus the higher isotropic ratio . for users with different radius of gyration : ( a ) 1 - 10 km ; ( b ) 10 - 100 km ; ( c ) 100 - 500 km ; ( d ) km . *directed motion initially increases with , mainly influenced by road usage , yet motion patterns become more diffusive for large distances with the increased use of air travel . ]longer distance movers in figures [ fig : pxy](c ) and ( d ) appear to have a more stretched component in the negative x - axis , with an expanding gap close to the origin as increases . to shed further light on this effect , figure [ fig : tweet_distribution ]compares the spatial distribution of tweets for the four categories of , focusing on the southeast region that includes more than half the country s population ( see supporting information s1 text for full maps of australia ) .it shows that for smaller , tweets are concentrated in clusters mainly in large cities , or other regional areas .for intermediate , we observe a much stronger tendency of tweets to be within an expanded region around key cities and along main roads connecting large cities .the tight coupling of movement at these distances with road usage explains the increased directivity ( and anisotropy ) of motion for these categories .the tweet distribution for large shows a completely different trend , with a renewed focus of tweet activity in and closely around the main cities .this difference likely stems from the change in mode of travel to airplanes , where people fly in to a destination with the intention of remaining within a limited orbit around this destination .these long distance movers appear to have a few target destinations , such as airports at key cities or locations of interest . 
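the isotropy ratio can be obtained by moving positions into the intrinsic reference frame , i.e. recentring them and rotating onto the principal axes of the position distribution , and then taking the ratio of the standard deviations perpendicular and parallel to the major axis . a minimal sketch ( the paper evaluates the ratio on the group - averaged density ; pooling the rotated positions of all users in an r_g group before taking the ratio gives the equivalent group - level number ) :

```python
import numpy as np

def intrinsic_frame(xy):
    """Recentre a set of projected positions and rotate them so that the
    principal (largest-variance) axis lies along x."""
    xy = np.asarray(xy, dtype=float)
    xy = xy - xy.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(xy.T))   # ascending eigenvalues
    return xy @ eigvecs[:, ::-1]                      # major axis first

def isotropy_ratio(xy):
    """sigma_y / sigma_x in the intrinsic frame: close to 1 for diffusive
    (isotropic) motion, small for strongly directed motion."""
    sx, sy = intrinsic_frame(xy).std(axis=0)
    return sy / sx
```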
as we average the movement patterns over a population , the dominant movement distances for each individual may vary widely , contributing the stretch of this tail along the negative x - axis in figures [ fig : pxy](c ) and ( d ) .the gap that appears on the negative x - axis close to the origin arises from the use of air travel , where people do not tweet between source and destination as they travel long - distances .these patterns may also be related to the sparse and concentrated population in australia with heavy concentration of people ( and likely twitter users ) at major population centres .another likely cause of this pattern is the fly - in / fly - out worker phenomenon , where workers in the mining sector stay at remote sites during weekdays and then return home or travel to holiday destinations in southeast asia during weekends .it is also likely that the increased isotropy for distances from 200 km up to 1000 km is due to long distance movers circulating in the vicinity of their destination away from home , given the high cost associated with returning to their home location .these long distance travellers may be sending tweet messages mostly from their main destination ( such as the arrival airport or a remote work site ) , and less frequently tweeting from satellite locations. steadily decreases with before increasing again from 200 km to around 1000 km , indicating that popular intercity trips contribute to increased isotropy . *( b ) the probability of return to the most popular location and more generally the preference for previously visited locations both drop significantly with increasing . ]the maps focus on the southeast region in australia that accounts for nearly half the population .tweet activity for and is mainly concentrated in large cities , while tweets for intermediate extend further along main highways and other regions between cities . ]given this disparity in movement patterns for different , we revisit the preferential return to previously visited locations for people with different .we specifically focus on the probability of return to the most popular ( home location ) and the exponent of the zipf s law fit .the reader is pointed to supporting information s1 text for full details of the analysis .we observe that the preferential return for the top location steadily decreases with increasing ( figure [ fig:4](b ) ) .the results indicate that local movers are nearly 50% more likely to tweet from their preferred location as long distance movers , with ranging from up to 0.8 for to as low as 0.64 for .the slope of the preferential return curve is negatively correlated with .we observe a monotonic decrease in for increasing , further highlighting that preferential return weakens with longer , not only for the most preferred location , but also for the general set of visited locations .the likely reason for the weaker preferential return for long distance movers is the high social cost they incur for moving , which reduces the perceived value of frequently returning to specific previously visited locations .we have found that human mobility patterns extracted from geotagged tweets have similar overall features as observed in mobile phone records , which demonstrates that twitter is a suitable proxy for studying human mobility . 
however , marked differences are clear for tweet - based mobility data compared to other modalities .first , the higher resolution of twitter data has uncovered a heterogeneous distribution of displacement and radius of gyration that appear to map to different modes of movement , namely intra - site , metropolitan , and inter - city movement .secondly , we have identified two types of twitter users in terms of the predictability of tweeting locations , a group that is highly persistent and predictable in their tweet location probably as a consequence of how they use the technology , and another group that is much more diverse and less predictable in their tweet locations .it is for the latter group that we conjecture twitter captures representative mobility patterns of users given the reduced preference for tweeting from a few select locations .it would be interesting to further study the effect of tweeting behaviour on the subsampling of real mobility patterns using a secondary source that provides a ground truth for continuous tracking of the real mobility .thirdly , twitter data reveals unexpected dynamics in mobility particularly for long distance movers , who are more diffusive in their movement than intermediate distance movers , most likely as a reflection of a switch in transportation mode towards air travel and local circulation around destination cities .we have also found that the likelihood of tweeting from the home location and more generally the strength of preferential return are strongly dependent on a person s orbit of movement , with long distance movers less likely to return to previously visited locations .our results indicate that the strict limit on tweet content does not appear to change the daily cycle in returning to the same location or the preferential return trends compared to call data records .overall , we find that population - level mobility patterns are well - represented by geo - tagged tweets , while individual - level patterns are more sensitive to contextual factors , such as the individual s degree of preference for tweeting from one or a few locations .our findings can be used for improved modelling of human movement and for better characterisation of twitter - based mobility patterns .for instance , epidemiologists modelling the risk of disease spread across a landscape can use our findings to create user movement profiles based on , where long distance movers tend to stay in and around big cities .this may narrow down the population of likely vectors for diseases that emerge in rural areas .our observations on preferential return can also be used for more fine - grained modelling of individual movement based on the user profile , as suggested in .these individual - based models can feed into not only disease spread forecasting , but also into the planning of communication and transportation networks .another implication of our work is for further studies in geography and demography .greater understanding of the mobility patterns of geotagged tweets can be used by geographers and demographers to model human movement and to understand the underlying drivers for people moving .coupled with tweet contents , these mobility dynamics can provide a useful tool for new methodologies in human geography studies , particularly for population projection .
despite mobility being the main driver of population changes , current population projection methods rely on coarse - grained census data .twitter - based tracking provides high spatiotemporal data for more realistic mobility assumption for population projection models .the current study is limited to geotagged tweets , which only account for a small portion of tweets . to fully exploit the potential of twitter in human mobility analysis , an interesting direction for futurework is to apply the similar approach to tweets without geotags using location inference based on the tweet context .the insight from this article on mobility dynamics from tweets also lays the groundwork to generate synthetic mobility data at various spatial scales for analysis of disease spread , transport systems or communication networks .data have been collected through the public twitter stream api ( https://dev.twitter.com/overview/api ) , as part of the emergency situation awareness project .our dataset consists of tweets from different users in australia from sep . 1 2013 to march 31 2014 . in this dataset, each tweet record has a geotag and a timestamp indicating where and when the tweet was posted . based on this information we are able to construct a user s location history denoted by a sequence . the original location information provided by the geotag is denoted by latitude and longitude , and for convenience we project the locations in the epsg:3112 coordinate system .we first filter the dataset to exclude tweets that are not posted within australia , restricting our study in the domestic domain .we then filter users who have unrealistic moving patterns to reduce the noise and outliers of the dataset . in this study , if the displacement between two consecutive locations is not traveled at an usual velocity , i.e. , we consider the mobility pattern is unrealistic .the following study is then based on the filtered dataset which contains 4,171,225 tweets and 79,055 users .+ to measure the visitation frequency , we first need to identify locations . due to the spatial uncertainty , proximate locations in the raw data can represent an identical location of interest .therefore we use dbscan clustering to eliminate the vagueness , i.e. locations in the same cluster are considered to be one single location .the advantage of dbscan is that it can identify clusters of arbitrary shape . 
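a sketch of the two preprocessing steps just described , the speed - based rejection of users with unrealistic movement and the dbscan clustering of raw positions into discrete locations . the speed threshold and the min_samples value are placeholders ( the values actually used are not preserved in this text ) ; only the 250 m clustering radius is taken from the description above :

```python
import numpy as np
from sklearn.cluster import DBSCAN

def is_realistic(times, xy, v_max=300.0):
    """Accept a user only if every consecutive displacement could have been
    travelled below v_max (m/s); times in seconds, xy in projected metres.
    v_max = 300 m/s is an illustrative placeholder threshold."""
    dt = np.diff(np.asarray(times, dtype=float))
    dr = np.linalg.norm(np.diff(np.asarray(xy, dtype=float), axis=0), axis=1)
    ok = dt > 0
    return bool(np.all(dr[ok] <= v_max * dt[ok]))

def cluster_locations(xy, eps=250.0, min_samples=1):
    """Label each tweet position with a discrete location id using DBSCAN;
    eps is the 250 m radius quoted in the text, min_samples is a placeholder
    (with min_samples=1 no point is discarded as noise)."""
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(xy)
```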
in particular, we use and in the dbscan clustering which represent the threshold distance and the minimum number of points to form a cluster respectively .the effect of changing the cluster size is investigated in supporting information s1 text .let the sequence of tweeting locations of user be , where each symbol denotes the user s tweeting location and is the number of distinct visited locations of user during the observation period .the unconditional entropy ( or shannon entropy ) of can be expressed as : where is the historical probability that location was visited by the user .the real entropy can be expressed as follows : }\ ] ] where is the probability of finding a particular time - ordered subsequence in the trajectory .the shannon entropy is inherently higher than the real entropy for users with sufficient tweet history .the real entropy can be estimated using a lempel - ziv ( lz ) algorithm that searches for repeated sequences .based on fano s inequality , the maximum bound of the user s predictability can be obtained from the entropy via the following equation : the binary entropy function is given by : and kz wrote the main manuscript text .all authors conducted data analysis and results synthesis .all authors reviewed the manuscript .balcan d , colizza v , goncalves b , hu h , ramasco jj , vespignani a. multiscale mobility networks and the spatial spreading of infectious diseases .proceedings of the national academy of sciences .2009;106(51):21484 - 21489 .tizzoni , m. , bajardi , p. , decuyper , a. , king , g. k. k. , schneider , c. m. , blondel , v. , ... colizza , v. ( 2014 ) . on the use of human mobility proxies for modeling epidemics .plos computational biology , 10(7 ) , e1003716 .wilson t , bell m. comparative empirical evaluations of internal migration models in subnational population projections .journal of population research . 2004;21(2):127 - 160 .jiang , s. et al . a review of urban computing for mobile phone traces : current methods , challenges and opportunities . in proceedings of the 2nd acm sigkdd international workshop on urban computing ( p. 2 ) .( 2013 , august ) .rhee i , shin m , hong s , lee k , kim sj , chong s. on the levy - walk nature of human mobility .ieee / acm transactions on networking ( ton ) .2011;19(3):630 - 643 .zhao k , musolesi m , hui p , rao w , tarkoma s. explaining the power - law distribution of human mobility through transportation modality decomposition . 2014 .arxiv preprint arxiv:1408.4910 .chaintreau a , hui p , crowcroft j , diot c , gass r , scott j. impact of human mobility on opportunistic forwarding algorithms .ieee transactions on mobile computing .2007;6(6 ) : 606 - 620 .zhang , y. , wang , l. , zhang , y. q. , li , x. ( 2012 ) . towards a temporal network analysis of interactive wifi users .epl ( europhysics letters ) , 98(6 ) , 68002 .frank mr , mitchell l , dodds ps , danforth cm .happiness and the patterns of life : a study of geolocated tweets . scientific reports .wang , q. , taylor , j. e. ( 2014 ) . quantifying human mobility perturbation and resilience in hurricane sandy .plos one , 9(11 ) , e112608 .yan xy , han xp , wang bh , zhou t. diversity of individual mobility patterns and emergence of aggregated scaling laws .scientific reports , 2013;3 .wesolowski , a. , eagle , n. , noor , a. m. , snow , r. w. , buckee , c. o. ( 2013 ) . the impact of biases in mobile phone ownership on estimates of human mobility .journal of the royal society interface , 10(81 ) , 20120986 laherrere j , sornette d. 
stretched exponential distributions in nature and economy : fat tails with characteristic scales .the european physical journal b - condensed matter and complex systems .19982(4 ) , 525 - 539 . ( 1998 ) lu , x. , wetter , e. , bharti , n. , tatem , a. j. , and bengtsson , l. ( 2013 ) . approaching the limit of predictability in human mobility .scientific reports , 3 .song , c , et al .`` limits of predictability in human mobility . ''science 327.5968 ( 2010 ) : 1018 - 1021 .gambs , s. , killijian , m. o. , del prado cortez , m. n. ( 2012 , april ) .next place prediction using mobility markov chains . in proceedings of the first workshop on measurement , privacy , and mobility ( p. 3 ) .acm .isaacman s , becker r , cceres r , martonosi m , rowland j , varshavsky a , willinger w. 2012 , june .human mobility modeling at metropolitan scales . in proceedings of the 10th international conference on mobile systems , applications , and services ( pp .239 - 252 ) .cameron , mark a. , et al .`` emergency situation awareness from twitter for crisis management . ''proceedings of the 21st international conference companion on world wide web .acm , 2012 .ester m , kriegel hp , sander j , xu x. a density - based algorithm for discovering clusters in large spatial databases with noise . in kdd .1996;96:226 - 231 .
|
understanding human mobility is crucial for a broad range of applications from disease prediction to communication networks . most efforts on studying human mobility have so far used private and low resolution data , such as call data records . here , we propose twitter as a proxy for human mobility , as it relies on publicly available data and provides high resolution positioning when users opt to geotag their tweets with their current location . we analyse a twitter dataset with more than six million geotagged tweets posted in australia , and we demonstrate that twitter can be a reliable source for studying human mobility patterns . our analysis shows that geotagged tweets can capture rich features of human mobility , such as the diversity of movement orbits among individuals and of movements within and between cities . we also find that short and long - distance movers both spend most of their time in large metropolitan areas , in contrast with intermediate - distance movers movements , reflecting the impact of different modes of travel . our study provides solid evidence that twitter can indeed be a useful proxy for tracking and predicting human movement .
|
recent advances in theoretical astrophysics have mostly been led by the rapid progress in supercomputing , of the computers themselves and the numerical techniques .in particular , hydrodynamical simulations coupled with gravity have proved to be a powerful tool to reveal the dynamics of many astrophysical and cosmological phenomena such as supernovae , star formation , relativistic jets , accretion discs , formation of large scale structure etc .most simulations deal with simplified models , assuming some symmetry and solving equations with reduced dimensions . in some fields , however , there are growing rationale that multidimensional effects can play a key role , and some phenomena are essentially multidimensional , meaning that numerical simulations also have to be carried out with full dimensionality .this itself can dramatically increase the numerical cost while at the same time , there are some studies where small scale effects can alter the global behaviour . in such casesit is necessary to resolve fine structures , making the calculation even more costly .such computationally expensive calculations have become possible by making full use of state - of - the - art supercomputers , with the aid of a combination of efficient numerical schemes and parallelization technologies . however , computational resources for these large scale simulations are still limited , and it is often difficult to carry out systematic studies . in astrophysical hydrodynamical simulations , it is usually not the hydrodynamics part that dominates the computational time . instead, what prevents us from extending calculations to higher dimensions and higher resolutions , is the additional physics such as radiative transfer , nuclear reactions , neutrino transport , self - gravity etc . in order to carry out systematic studies in multi - dimensions, it is mandatory to construct rapid methods to treat these additional features .these additional effects are included by solving the governing equations of that feature and coupling it to the hydrodynamic euler equations , or by applying approximated models based on feasible assumptions . for the case of self - gravity ,the additional basic equation is the poisson equation : where is the laplace operator , the newtonian gravitational potential , the gravitational constant and the mass density .this equation is an elliptic type partial differential equation ( pde ) which can only be solved via direct matrix inversions or iterative methods or fast fourier transform . despite the efforts made in the past few decades to construct rapid poisson solvers , it still remains the pain in the neck for many astrophysical hydrodynamic simulations .it becomes most problematic in multidimensional simulations with eulerian schemes , and is sometimes approximated by monopoles even though the hydrodynamics are multidimensional .the problem stems from the mathematical character of the equation itself , where the value on each cell depends on information from every other cell .this makes it extremely inefficient for parallelization , due to the huge amount of communication among memories and slowing down the whole calculation .the situation gets increasingly worse as the size of the simulation increases . on the other hand ,the equation for general relativity is the einstein field equations . 
when formulated as an initial value problem ,the einstein equations indicate that the evolution of gravitational fields are governed by a hyperbolic equation as long as it initially satisfies the hamiltonian and momentum constraints .this implies that gravity is essentially hyperbolic , and its evolution only depends on its local neighbourhood . in this paperwe propose a new method to circumvent the problems in newtonian gravity , by incorporating the hyperbolicity of general relativity into the poisson equation .our new method can significantly reduce the computational cost of self - gravitational calculations . instead of the poisson equation ( [ poissoneq .] ) , we choose to solve an inhomogeneous wave equation where we define as the propagation speed of gravitation .this equation was motivated from the essentially hyperbolic nature of gravity in general relativity .it roughly corresponds to the weak field limit of the einstein equations .the newtonian limit is achieved by assuming an infinite , which is the cause of the difficulties , but here we just assume it is large , and not take the limit . similar to electromagnetic fields , this equation will introduce causality , and the solution will therefore be somewhat like a retarded potential . in this way ,eq.([hneq . ] ) can easily be parallelized since it is a hyperbolic pde and only requires communication of memories between neighbouring cells .our approach seems similar to the method introduced by where they convert the poisson equation into a parabolic equation .however , the nature of a parabolic pde and hyperbolic pde is totally different , thus introducing different advantages and disadvantages to the method .one important parameter that needs to be set is the value for the gravitation propagation speed .a large enough will give us an equivalent solution to the poisson equation , which is desired from the newtonian point of view , but the computational time will be large due to the strict courant - friedrichs - lewy condition . if we take a lower value for , the computation will speed up , but the solution will deviate from that of the poisson equation because the time derivative becomes comparable with the other terms .thus the value for should be chosen carefully for each simulation according to the required accuracy and the computational resources available . yetwe show later in this paper that can be taken relatively small without affecting the solution , and can dramatically improve the numerical efficiency of self - gravity .this paper is organized as follows : in section 2 , we will explain the numerical setups and methods used for our test calculations of our new method , the results will be shown in section 3 and we will discuss the errors in section 4 . 
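as a concrete illustration of the locality of this scheme , the following minimal sketch advances the inhomogeneous wave equation with an explicit leapfrog update and sub - cycles it within one hydrodynamical step as described above . it assumes a uniform cartesian grid and writes the equation as laplacian(phi) - (1/c_g^2) d^2 phi / dt^2 = 4 pi G rho ; the boundary treatment ( robin conditions ) and the hydrodynamics update are omitted , and all names are illustrative rather than taken from the authors' code :

```python
import numpy as np

G = 6.674e-8   # gravitational constant in cgs units (cm^3 g^-1 s^-2)

def advance_potential(phi, phi_old, rho, dx, dt, cg):
    """One explicit leapfrog step of the inhomogeneous wave equation for the
    potential.  Only nearest-neighbour cells enter the Laplacian stencil, so
    no global communication is needed; np.roll wraps periodically and the
    boundary cells must afterwards be overwritten by the chosen boundary
    condition."""
    lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
           np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4.0 * phi) / dx**2
    return 2.0 * phi - phi_old + (cg * dt)**2 * (lap - 4.0 * np.pi * G * rho)

def gravity_subcycle(phi, phi_old, rho, dx, dt_hydro, v_char,
                     alpha=5.0, cfl_grav=0.3):
    """Advance the potential over one hydrodynamical step while the density
    is held fixed.  cg is tied to the largest characteristic speed
    v_char = max(|v| + c_s) through the free parameter alpha > 1, so the
    stricter CFL limit of the wave equation leads to roughly alpha gravity
    sub-steps per hydro step."""
    cg = alpha * v_char
    dt_grav = cfl_grav * dx / cg
    t = 0.0
    while t < dt_hydro:
        dt = min(dt_grav, dt_hydro - t)
        phi, phi_old = advance_potential(phi, phi_old, rho, dx, dt, cg), phi
        t += dt
    return phi, phi_old
```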
the conclusion will be given in section 5 .we performed several calculations to demonstrate the efficiency of our new method .firstly , we checked how well our new method maintains the equilibrium of a polytrope sphere in two - dimensions ( 2d ) and three - dimensions ( 3d ) .secondly , we simulate the head - on collision of equal mass polytropes in 2d .we use a hydrodynamical code which solves the ideal magnetohydrodynamic equations with the finite volume method , using the hlld - type approximate riemann solver .since magnetic fields are ignored in our calculations , it is equivalent to using the hllc scheme .cylindrical coordinates are used for 2d simulations assuming axisymmetry whereas 3d simulations are carried out in cartesian coordinates .an ideal gas equation of state with an adiabatic index is used for all calculations .an outgoing boundary condition is used for the outer boundaries . for self - gravitywe solve two different equations ; eq.([hneq . ] ) and the poisson equation , for comparison . an iterative method called the miccg method is used to solve the poisson equation , with boundary values given by multipole expansion .] ) is solved by simple discretization with the aid of the cartoon mesh method to simplify the cylindrical geometry in the 2d tests .robin boundary conditions are applied for the outer boundary . as for the value of , we normalize it by the characteristic velocity where is the sound speed , is velocity , and is an arbitrary parameter that should be larger than unity .the timestep condition for the wave equation will become times stricter than for the hydrodynamical part . although the gravity and hydrodynamical equations should essentially be solved simultaneously , here we choose to solve them on separate timelines . in this way , the wave equation will be solved times during one hydrodynamical timestep , and will reduce the computational cost . owing to the fact that the wave equation only depends on the density distribution , and since the density distribution does not significantly change during one timestep , this will give sufficient accuracy . it should also be noted that the courant number used to decide the timesteps for the gravity and hydrodynamical parts do not necessarily need to coincide .if we take larger courant numbers for the gravity part than the hydrodynamics , the computational cost can be reduced even more . in this paperwe simply take both courant numbers to be 0.3 , but the results did not change even for larger courant numbers such as 0.9 . for the first test calculation , we place a polytrope sphere with a polytropic index at the centre of the 2d cylindrical grid .the sphere has a mass and radius of .the computational domain is taken approximately twice the stellar radius in both radial and longitudinal directions , and divided into cells .a dilute atmosphere is placed around the star , with a mass negligible compared to the stellar mass .we simply wait for several dynamical times to see whether the star stays in mechanical equilibrium .two simulations are carried out for comparison , one by solving the poisson equation throughout ( p model ) , and one by solving eq.([hneq . 
] ) with ( h model ) .the initial condition is given by solving the poisson equation in both cases .as a demonstration of 3d capabilities , we place the same polytrope sphere at the origin of a three - dimensional cartesian grid .plane symmetry is assumed for all three directions , which will leave us with an eighth of the star .the computational domain is taken times the stellar radius in each direction , and divided into cells .the resolution of the grid is equivalent to the first test calculation . to make it a 3d specific problem, we add random density perturbations with an amplitude of .this will induce some stellar oscillation modes but overall , the star should stay in a stable state .since we use a relatively large number of cells , it is extremely difficult to solve the poisson equation .in fact , it was impossible on our workstation to solve in a realistic timescale , so we interpolate from the exact solution as an initial condition for the gravitational potential instead . to test a more dynamically evolving case , we place another identical polytrope sphere cm away from the centre of the region in the longitudinal direction on a 2d cylindrical grid .we assume equatorial symmetry , which mirrors the star on the opposite side .since we do not give any orbital motions , the two stars will simply fall into each other by the gravitational force of each other , causing a head - on collision .like in the first test , we carry out the simulation with the two different types of self - gravity for comparison , and call them the p and h models .fig.[fig : statstar ] shows the density distribution of the initial condition on the left side , and dynamical times later on the right side for the h model .both panels show almost identical distributions , indicating that hydrostatic equilibrium of the star is well resolved with this grid .the degree of equilibrium can be checked in fig.[fig : rhocvirial ] , which show the evolution of central density and the degree of satisfaction of the virial theorem ( ) defined as where and are the internal and gravitational energies integrated over the bound zones ( zones with negative total energy ) . the initial condition for the polytrope sphereis given simply by interpolation from the exact solution .so as soon as the simulation starts , the star tries to adjust to its equilibrium condition on the discretized grid .this leads to a slow decrease in the central density , but the decline rate is extremely slow and it is safe to assume that the star is resolved properly on this grid , with both methods .there is a roughly dynamical timescale oscillation in the value of in both p and h models .but the amplitude is very small and does not grow , which indicates that the star satisfies the virial equilibrium condition throughout the simulation .the computational time was times shorter for the h model than the p model .density plot for the stationary star test . left panel : initial condition , right panel : s later . 
] , lower panel ) in the 2d static star simulations .density is normalized by the initial central density , and is defined in eq.[eq : vc ] .red lines : h model , black lines : p model.[fig : rhocvirial ] ] similar results were obtained for the 3d star case , depicted in fig.[fig:3dvirial ] .the black lines show the non - perturbed star case , which is simply an extention of the h model calculation to 3d and in different coordinates .it is remarkable that the star remains in virial equilibrium even in 3d , at a degree of .the red lines show the evolution of the same star with random density perturbations . there is no notable difference in the evolution of the central density , only declining after dynamical times .the fluctuation around virial equilibrium is larger than the non - perturbed model , but does not grow in time , staying in a stably oscillating state at the same timescale as the 2d test . , lower panel ) in the 3d simulations .density is normalized by the initial central density , and is defined in eq.[eq : vc ] .red lines : perturbed model , black lines : non - perturbed model.[fig:3dvirial ] ] fig.[fig : headonsnap ] shows the density distribution of the head - on collision simulations with the two different methods at two different times .the upper halves of each panel are results for the p model , and the lower halves are for the h model .it can be seen that the two stars fall into each other , causing a head - on collision , forming a shock at the interface .the stars then merge to become a single star , but a part of the envelope is blown away by the shock . although the evolution is delayed by in the h model , the overall behaviour of the dynamics between the two models are quite similar .this already indicates that our new method can at least be used for qualitative studies .moreover , the total computational time of the h model was times shorter than the p model , implying that our new method is most efficient for dynamically evolving gravitational fields .this is because when the gravitational potential is moving , the miccg method needs more iterations than stationary situations to converge to its solution .snapshots of the density distribution for the head - on collision simulations . upper halves of each panel ; p model , lower halves ; h model .the time elapsed are written on the left corners , and white line at the centre shows the coordinate axis . ]here we are not interested in the physics of the test calculations carried out , but in the difference between the two methods . our aim was to produce an efficient method to treat self - gravity , that reproduces the same results as with previous methods which solve the poisson equation . in this sectionwe quantitatively evaluate the differences between the solution obtained in our simulations by the new method and the solution of the poisson equation .we focus on the head - on collision simulation , since it had the largest difference and because we are more interested in applying our new method to dynamically evolving systems .one of the main causes of the difference between the two methods is the boundary condition . for the poisson solver , we use dirichlet boundary conditions with values given by multipole expansion , which obtains the exact solution for the poisson equation for the given density distribution . 
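for reference , a minimal sketch of how such boundary values can be obtained from the density field when the expansion is truncated at the monopole term ( the solver described above keeps higher - order multipoles as well ) ; the function name and the cgs value of g are our own choices :

```python
import numpy as np

def monopole_boundary(rho, x, y, z, dV, xb, yb, zb, G=6.674e-8):
    """Lowest-order (monopole) Dirichlet value  phi_b = -G M / r  at boundary
    points (xb, yb, zb), with r measured from the centre of mass of the
    density rho defined on cell centres (x, y, z) with cell volumes dV.
    Higher-order multipole terms would be added on top of this for the
    boundary values used by the Poisson solver."""
    m = rho * dV
    M = m.sum()
    cx = (m * x).sum() / M
    cy = (m * y).sum() / M
    cz = (m * z).sum() / M
    r = np.sqrt((xb - cx) ** 2 + (yb - cy) ** 2 + (zb - cz) ** 2)
    return -G * M / r
```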
on the other hand ,the boundary condition used in the new method is a robin boundary , which is equivalent to assuming monopole gravity .since the higher order terms are non - negligible in the current situation , this boundary condition is inappropriate .we carry out several additional simulations to quantify the effects of the boundary condition , and seek how to improve the results .the parameters used in our extra simulations are listed in table [ tab : models ] .eq.([hneq . ] ) is used for self - gravity in all models .model ar05 corresponds to the h model explained above .three different modifications are made to single out the effects of the boundary condition . in the first approach ,we apply the dirichlet boundary condition by multipole expansion as in the p model ( ad05 ) .this will directly remove the boundary error , although the calculation becomes heavy and is inappropriate for practical use .our second approach is to widen the computational domain without changing the resolution ( ar05-cr05 ) , which will weaken the multipole effects at the boundary .finally , we also change the value of to a larger value ( ar20-cr20 ) , which should bring our equation closer to the poisson equation .we define the relative `` error '' as is the gravitational potential calculated with our new method , and is the solution for the poisson equation at the given density distribution .hence the average relative error is where the integrals are taken over the entire region ..model descriptions [ cols="^,^,^,^,^,^,^ " , ] [ tab : models ] fig.[fig : diffevo ] shows the time evolution of the average relative error in each model .all lines fluctuate around a certain value , indicating that the error does not pile up in most cases .the maximum error was even in our `` worst '' model ( ar05 , ar20 ) .this is the cause of the delay in the collision time .the error was reduced most when the dirichlet boundary condition was applied ( ad05 ; red dashed line ) , where the error does not exceed throughout the calculation , and the delay time also became negligible .this is a surprisingly good agreement , and proves that the difference of our method to previous ones only arise from the boundaries .our hypothesis is further verified by the other simulations with larger computational regions .the relative error is roughly inverse proportional to the number of zones , from in zones to in zones . the wider the region , the smaller the errors .this is because the relative contribution of the boundary to the computational domain is smaller for wider regions , and also the multipole effects are weakened at the boundary .another fact to be noted is that the error does not strongly depend on the value of used in the simulation .the average error simply fluctuates around a value determined only by the domain size , at a frequency proportional to where is the size of the domain .in fact , even if we take , the overall behaviour is indistinguishable with other models as long as we take a large enough region .at the most turbulent and messy situations like after the collision ( s ) , the errors rise higher in the lower models because they can not react fast enough to rapidly evolving systems .the reason for the oscillations in the errors can be understood by decomposing the gravitational potential into two parts .here we assume that is the solution for the poisson equation ( ) , and is the deviation from it .if we plug this in to eq.([hneq . 
] ) and use the poisson equation , we are left with this is the equation which describes the creation and propagation of the error .if the initial condition satisfies the poisson equation , i.e. , the only errors are generated by the source term on the right hand side and the boundary conditions .besides the boundary , the source of the error is apparently the second time derivative of the gravitational potential , which is determined by the motion of the density distribution .this is why the error rised at the later times in fig.[fig : diffevo ] where it was turbulent and messy .due to the fact that this is a wave equation , any errors that are generated will propagate away out of the boundaries .the creation and propagation of errors is what causes the small oscillations of the errors in all of our test calculations .the amplitude of the errors are determined by the magnitude of this source term , which can be estimated by combining the poisson equation , continuity equation and equation of motion . by taking the time derivative of the poisson equation and using the continuity equation, one can get and then similarly by taking the time derivative again and using the equation of motion , one can obtain something like depending on the physics included . from this equation , it can easily be estimated that so if we normalize eq.([erreq . ] ) by the original eq.([hneq . ] ) , we can say that the relative amplitude of the error is roughly provided that is taken larger than the characteristic velocity , or when there is not so much accelerating motion , the right hand side on eq.([erreq . ] ) can be assumed to be sufficiently small .time evolution of the average relative error for each model .colours of the lines denote the domain size as described in table [ tab : models ] . dashed line : ad05 model .solid lines : models , dotted lines : models . ] from the above results , we conclude that our new method can be safely used even for dynamically evolving systems provided that is chosen large enough and the outer boundary condition is given appropriately .robin boundaries seem to be appropriate for any kind of application due to the fact that gravitational forces can be well approximated by monopoles at large distances from the source .it is also numerically efficient since it only requires information of the neighbouring cell .the only problem is that the boundary should be taken far enough from the source to reduce the errors sufficiently .larger regions lead to larger computational cost , weakening the advantage of the new method .one possible workaround is to take a wider region only for the gravitational potential , and solving eq.([hneq . ] ) on an extended grid exceeding the region for hydrodynamics .if the density near the hydro boundaries are low enough , or in other words , the total mass inside the region is conserved , it will be possible to approximate that the extended regions are close to vacuum , and calculate the wave equation without the source term .the cost for the wave equation is cheap , thus we can extend the region relatively easily without increasing the total cost .periodic boundaries are also suitable for this method whenever appropriate . in such cases ,the average density of the computational region should be subtracted from the source term of eq.([hneq . 
] ) .fig.[fig : comptime ] shows the computational time it took until the stars come in contact ( s ) for each method .calculations were carried out on a 172.8 gflops machine with openmp parallelization on 8 threads .it can be seen that the computational time for the gravity part ( dashed lines ) can be dramatically reduced compared to previous methods , and the benefit becomes more prominent as the scale of the calculation increases . since our test simulation was dominated by the gravity part with previous methods , the new method improved the overall performance directly .almost of the computational time was spent on the gravity part using the poisson solver , whereas the fraction is with the new method .this is a remarkable improvement , since it is not so common with existing solvers that the time spent on the gravity solver is negligible compared to the hydrodynamics . for other cases where the computational time is dominated by other implementations , the improvement in the gravity part may not be so critical , e.g. in core collapse simulations which implement detailed microphysics , the fraction of time used for computing gravity is typically below ,so the reduction of the total time will be at most .the computational time for the gravity part with this method scales linearly to the number of cells , which is much better than previous methods which usually scale as or .multigrid methods are supposed to scale as too , but the absolute number of operations are obviously much smaller with the new method and much more simpler. our method will suit even more on even larger scale simulations parallelized by mpi .in these cases the communication between memories are sometimes the bottleneck , but our new method will not be restricted by this since it does not require intensive communication .it should also be compatible for adaptive mesh refinement or nested grid techniques , and in this way , the outer boundary can be taken far enough without significantly increasing the computational cost .computational time until the stars contact for different methods .square plots : r05 models , triangle plots : r20 models , circle plots : same calculation with poisson solver .solid lines : total computational time , dashed lines : time for the gravity part only . ]a new method has been introduced to treat self - gravity in eulerian hydrodynamical simulations , by modifying the poisson equation into an inhomogeneous wave equation . as long as the gravitation propagation speed is taken to be larger than the hydrodynamical characteristic speed, the results agree with solutions for the poisson equation depending on the boundary condition .if the errors from the boundary are removed in some manner , by applying dirichlet boundaries or placing the boundary far away , the solution almost perfectly satisfy the poisson equation .the computational time of the gravity part was reduced by an order of magnitude , and it should become more prominent for larger scale simulations .it is also fully compatible for numerical techniques such as parallelization , nested grids , adaptive mesh refinement , extending its superiority over existent methods .the sole parameter that needs to be set is , the gravitational propagation speed .this should ideally be taken as the speed of light , but our test simulations suggest that it can be taken to fairly small values as long as it exceeds the characteristic velocity of the hydrodynamics . 
considering the computational time it is good that we can take it fairly small , but the effects on the errors should be clarified in future studies .this work was supported by the grants - in - aid for the scientific research from the ministry of education , culture , sports , science , and technology ( mext ) of japan ( nos .24103006 , 24740165 , and 24244036 ) , the hpci strategic program of mext , mext grant - in - aid for scientific research on innovative areas `` new developments in astrophysics through multi - messenger observations of gravitational wave sources '' ( grant number a05 24103006 ) , and the research grant for young scientists , early bird program from waseda research institute for science and engineering .was supported in part by jsps postdoctoral fellowships for research abroad no .27 - 348 .
|
a new computationally efficient method has been introduced to treat self - gravity in eulerian hydrodynamical simulations . it is applied simply by modifying the poisson equation into an inhomogeneous wave equation . this roughly corresponds to the weak field limit of the einstein equations in general relativity , and as long as the gravitation propagation speed is taken to be larger than the hydrodynamical characteristic speed , the results agree with solutions for the poisson equation . the solutions almost perfectly agree if the domain is taken large enough , or appropriate boundary conditions are given . our new method can not only significantly reduce the computational time compared with existent methods , but is also fully compatible with massive parallel computation , nested grids and adaptive mesh refinement techniques , all of which can accelerate the progress in computational astrophysics and cosmology .
|
_ lisa_pathfinder ( _ _ ) is an _ esa_mission , with _nasa_contributions , whose main objective is to put to test critical parts of _ lisa_(laser interferometer space antenna ) , the first space borne gravitational wave ( gw ) observatory . the science module on board _lisa_technology package ( _ ltp _ ) .the unprecedented sensitivity of the _ ltp_has prompted the conceptual enhancement of __ s science objectives as regards the purity of geodesic , or _free fall _ motion of test masses in the interplanetary gravitational field .free fall control is achieved by gravitational reference sensors ( grs ) .these are a set of capacitive sensors which can determine to high precision ( nano - metres ) the 3d position and orientation of cubic test masses relative to their non - contacting enclosure , which is rigidly linked to the spacecraft structure . detected off - centre deviations trigger action by a set of micro - thrusters which _ move the spacecraft _ such that the mass returns to its centred position .the combination of the grs , the thrusters and the control system ( the dfacs , drag free and attitude control system ) is called _ drag - free _ subsystem .the latter is intended to accurately nullify the effects of any non - gravitational forces acting on the spacecraft .this makes possible to detect _ differential _ gravitational accelerations between the two test masses , whether by precision interferometry or by the drag free system itself .this is fundamental for _ lisa _ , since gravitational waves ( gws ) show up as _ tides _ , i.e. , time varying differential gravitational accelerations .the precision of the measurement done with the _ltp_is required to be \ , { \rm m}\,{\rm s}^{-2}/\sqrt{\rm hz}\ , \quad 1\,{\rm mhz}\leq\frac{\omega}{2\pi}\leq 30\,{\rm mhz } \label{eq.1}\ ] ] we shall refer to the above frequency band as the _ ltp_measuring bandwidth ( mbw ) in the sequel .equation is ten times less demanding than what is needed for _ lisa _ , both in magnitude and in frequency band , yet it is between two and three orders of magnitude better than has been achieved or required so far for space missions .it has relevant consequences for future missions , which need high performance drag free , hence the relevance of _ _ beyond its natural objectives as a _lisa_precursor . ,width=283 ] in order to meet the above requirements , the residual pressure inside the grs must be under 10 pa , a condition which is classified as _ very high vacuum _ by the american vacuum society .this implies that their interior have to be tightly sealed within a _ vacuum enclosure _( ve ) , and non - mechanical getter pumps installed to ensure a suitably rarefied environment around the test masses .perhaps a more obvious option would have been to communicate the ve directly with the external interplanetary vacuum , which is much better than 10 pa .recent studies by the project engineering team have shown that there are serious difficulties with such option .for example , venting out of residual gas has time scales exceeding the very lpf mission lifetime , cleanliness control inside the ve is tighter with the ow , etc .the general layout is shown in figure [ fig.1 ] . 
as regards the issues we address in this paper , attentionis drawn to the _ optical window _ ( ow ) , which is the interface between the test masses and the optical bench : laser beams must bounce off the test masses to monitor their positions by precision interferometry , hence a transparent window is necessary for the light to make it to the interior of the ve .the ow is a plane - parallel plate and is therefore a potential source of noise : random variations of its optical properties may result in corresponding optical path fluctuations , which distort the laser light phase , hence the optical metrology readout .great care must be taken when manufacturing this critical component of the _ ltp_and , once manufactured , characterisation of its behaviour duly performed .the most important agent responsible for ow fluctuations is _ temperature _ fluctuations .these cause various degrees of mechanical stresses across the rim , as well as temperature dependent index of refraction changes .the former are very difficult to model with quantitative accuracy , mostly due to lack of precise control of mounting interface behaviours , but the former can be much better studied in a stress free environment .this paper is concerned with the experimental characterisation on ground of prototype ows , and with the phenomenological modeling of their response to thermal excitations .this is justified if the noise fluctuations are smaller than the applied stimuli and if the system behaves linearly .the philosophy of the approach is the one typical of the _ diagnostics subsystem _ , as described in .this is : apply controlled temperature signals of high signal - to - noise ratio to the titanium flange where the ow is held see below , and measure the temperature of the former .measure also the induced phase shifts in a laser beam which travels through the ow , then try to establish the transfer function between both magnitudes , i.e. , temperature and phase - shifts .the transfer function thus obtained is assumed to also be valid in the situation when only noise is present in the flange .the latter extrapolation hypothesis is the clue to the determination of the phase noise contributed by temperature fluctuation noise in the ow , on the basis of temperature measurements .the success of the proposed empirical approach depends on our ability to find a transfer function which depends on a ( preferably reduced ) number of parameters , which do not change significantly across different conditions and runs of the experiment .as we shall now show , we have found that a single pole _ _ arma__. process describes rather satisfactorily the relationship we look for .the precise meaning of this concept will be discussed in detail in the following sections , and the results applied to evaluate the temperature fluctuation noise in a dedicated experiment .the paper is organised as follows : in section [ sec.2 ] we describe the experiment layout , including hardware and data acquisition details .section [ sec.3 ] is devoted to the data processing , analysis , two modes of model fitting a direct linear regression and a single pole _arma _ model and numerical results . 
in section [ sec.4 ]we examine in detail the _arma_(2,1 ) fit , and derive important implications for the understanding of the physical processes happening in the ow .section [ sec.5 ] addresses how the previous analysis can be applied to quantify the contribution of temperature fluctuation noise to the total phasemeter noise , based on another set of experimental results , and section [ sec.6 ] develops an interesting exercise whereby a continuous time model is suggested as the origin of the discrete time _ arma _finally , conclusions and bibliographic references close the article .the current baseline of _ lisa___pathfindfer _ _ and _ lisa_includes vacuum tanks containing the test masses which act as end mirrors for the interferometer . presence of such tanks , or vacuum enclosures ( ve ) , force the inclusion of a transmissive element interfacing between the interior of the ve and the optical bench outside .this optical element is the _ optical window _ ( ow ) . in this sectionwe describe the laboratory hardware and conditions of several runs of measurements conducted in aei hannover laboratory facilities to characterise the thermal behaviour of the ow . in the experiment twodifferent prototype ows were tested .both were manufactured following the same baseline as the one to be applied in the final _ltp_flight model .the main element of the window is a very low thermal expansion coefficient glass chosen in order to minimise the variation of the optical pathlength with respect the temperature .the figure of merit quantified by equation below is of for our particular choice , the _ ohara s - phm52 _( =1.606 , = ) .this parameter can reach values as high as for bk7 or for fused silica .the glass , of 30 mm diameter and 6 mm length was clamped between two titanium flanges , fastened by means of titanium bolts , and sealed by two _ helicoflex_ rings to prevent gas leakage in space conditions .the ow is expected to induce thermal related noise in the metrology subsystem . in order to quantify its contribution to the total noise budgeta set of thermal diagnostics items were attached to the optical window prototypes .they are shown in figure [ fig : exp ] , left panel : two kapton heaters _minco hk5303 _ attached to the titanium flange lateral surface , and four glass encapsulated thermistors _betatherm g10k4d853 _ attached in pairs to the titanium flange and on the athermal glass surface , for precision temperature measurements .these diagnostics items were all glued to their attachment points with pressure sensitive adhesive ( psa ) tape _ 3m-966 _ , of similar characteristics to the one to be used in flight .the temperature sensors on the glass will actually not fly with the _ ltp_. they will however provide relevant information to implement real mission data analysis procedures and methods , for which only the titanium temperature data will be available . during the experiment, the window was leaning vertically on a pvc two - rail structure see figure [ fig : exp ] , right panel , which impeded any high conductivity thermal contact with the rest of the hardware .although not directly affecting the thermo - optical interaction studied here , the ow will be part of the ve in the real _ ltp _ , thus a higher thermal conductance is to be expected , and therefore a faster suppression of thermal gradients is foreseen during mission operations . the complete set - up ( i.e. 
, the glass plus its mounting structure and the just mentioned diagnostics items ) was inserted as a transmissive element in a dedicated optical bench , as seen in figure [ fig.3 ] .the heaters were covered with aluminium foil to reduce thermal radiation effects ( figure [ fig : exp ] , right ) .for the same reason , the window was introduced in a copper box leaving only a narrow opening for the laser beam to go through . as seen in the schematic of figure [ fig.3 ], the beam traverses the ow only once. this will not be the case in the real _ ltp _ , where the laser will go twice through each window , instead , but the one passage configuration used here simplifies the ow thermal characterisation without information losses .all the experiments were performed under low pressure conditions at a vacuum level .heat pulse applied during .legend indications correspond to the thermometers shown in figure [ fig : exp ] . ]the optical window was subjected to various heat pulses comprising a wide range of duration length and powers in order to identify suitable parameters for the thermal test to be performed in - flight .the data here reported gather 25 experiment runs on two different prototypes , applying heat pulses from to ranging from to of application time .all experiments were performed at room temperature , which falls within the expected range of working temperatures of the _ltp_experiment during operations , required to be between and .figure [ fig.4 ] shows a typical response data plot , with indication of the temperature sensor readings and the interferometrically registered phase shifts corresponding to a specific heat signal input see the figure caption for the details .two different data acquisition systems were used in the experiment : the interferometric data were acquired via the _ltp_phasemeter prototype , whereas the thermal diagnostics data were acquired using the _ltp_front end electronics ( fee ) prototype .both acquisition systems have previously successfully passed tests of compliance with mission noise budgets .the main purpose of this section is to give account of the _ measured _ interferometer output data in terms of the also measured temperature data . while in this experiment both are of course ultimately caused by the heaters signal , our interest focuses on the temperature _ vs. _ phase relationship , as this is the one we need to quantify the magnitude of temperature fluctuations noise during science operations in flight . to serve this purpose , we adopt model fitting techniques .two approaches will be proposed , and discussed in the ensuing section : a direct linear regression fit of the interferometric data to the temperature read - out coming from sensors on the titanium flange and those on the ow glass itself , and an _ arma _ model using only temperature readout from the titanium temperature sensors .the latter is of particular interest , since it is not foreseen that temperature sensors be attached to the glass surface in the real _ltp_. before we attempt to fit the data to a useful model , some data pre - processing is required .the temperature and phase acquisition data systems reside on different hardware and software , and deliver the respective time series data for analysis at sample rates which are different as well : temperature data are sampled at , whereas phase data are sampled at , instead .downsampling and resampling thus needs to be applied to the latter in order to make meaningful sense of data fitting algorithms . 
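a minimal sketch of this preprocessing chain (anti-aliased downsampling, resampling onto the temperature time stamps, de-trending from the pre-pulse segment, as detailed in the next paragraph) followed by the direct linear regression fit of eq. (3) might look as follows; the sampling rates, pulse times, amplitudes and noise levels are invented stand-ins for the real fee and phasemeter data, and generic scipy/numpy routines replace the actual _ltp_ analysis tools.

```python
import numpy as np
from scipy import signal

# synthetic stand-ins for the measured series (illustrative values only)
f_temp, f_phase = 1.0, 100.0                    # assumed sampling rates (hz)
t_temp = np.arange(0.0, 4000.0, 1.0 / f_temp)
t_phase = np.arange(0.0, 4000.0, 1.0 / f_phase)
t_ti = 0.3 * (t_temp > 1000) + 0.01 * np.random.randn(t_temp.size)      # flange sensor
t_glass = 0.1 * (t_temp > 1200) + 0.005 * np.random.randn(t_temp.size)  # glass sensor
phase = np.interp(t_phase, t_temp, 5e-6 * t_ti + 2e-6 * t_glass) \
        + 1e-7 * np.random.randn(t_phase.size)                          # phasemeter output

# 1) anti-aliased downsampling of the phase to the temperature rate
q = int(f_phase / f_temp)
phase_ds = signal.decimate(phase, q, ftype="fir", zero_phase=True)

# 2) resampling onto the temperature time stamps
phase_rs = np.interp(t_temp, t_phase[::q][:phase_ds.size], phase_ds)

# 3) de-trending with the drift estimated from the pre-pulse segment
pre = t_temp < 1000.0
for y in (t_ti, t_glass, phase_rs):
    y -= np.polyval(np.polyfit(t_temp[pre], y[pre], 1), t_temp)

# 4) direct linear regression  phi[n] ~ p1*T_ti[n] + p2*T_glass[n]
A = np.column_stack([t_ti, t_glass])
(p1, p2), *_ = np.linalg.lstsq(A, phase_rs, rcond=None)
print(p1, p2)
```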
to avoid aliasing effects at downsampling phase ,suitable low pass filters are applied .this is however not enough to have matched sampling times in both time series , so an additional interpolation algorithm is used for properly matched resampling .in addition , each data segment is de - trended prior to model fitting .the removed trend is evaluated from the first seconds previous to the heat input signal begins .this way we get rid of environmental drifts and spurious trending effects . a typical phasemeter response when heat pulses are applied to the ow is shown in figure [ fig.4 ] .an essentially instantaneous phase response is observed in coincidence with thermometers excitations , which suggests phase behaviour can be described as a direct , or single - time relationship between the various temperature readings and associated phase shifts . if we additionally make the hypothesis that such relationship is _ _ linear _ _ then the model is given by where is the temperature read by the thermometer on the titanium flange closest to the activated heater , and that of a thermometer on the ow glass .the parameters and are to be estimated by a least squares algorithm , which requires the square error - p_1\,t_{\rm ti}[n ] -p_2\,t_{\rm glass}[n]\right\}^2 \label{eq.3}\ ] ] to be the smallest possible for the given data streams . here , ] and \theta ] .finally , is an abbreviation for the vector of _ arma _ parameters .system identification in this approach is again based on a least squares criterion , for which a suitably defined square error needs to be defined . following ,this is the so called _ prediction error _ : - g(q,{\mbox{\boldmath}})\,t_{\rm ti}[n]\right\}^2 \label{eq.9}\ ] ] the estimates of the parameters are those which cause to be minimum .algorithms to find them are more robust if the additional hypothesis holds that the residuals - g(q,{\mbox{\boldmath}})\,t_{\rm ti}[n]\right\} ] and ] and the temperature at the titanium flange ] , equationis approximated by -\left(1+\frac{\delta t}{\tau}\right)^{-1}\,\phi[n-1 ] & = & a\,\left(1+\frac{\delta t}{\tau}\right)^{-1}\,\left\ { t_{\rm ti}[n]-t_{\rm ti}[n-1]\right\ } \nonumber \\ & + & b\,\delta t \left(1+\frac{\delta t}{\tau}\right)^{-1}\ , t_{\rm ti}[n ] \label{eq.16}\end{aligned}\ ] ] this can be readily compared to equation to obtain is seen to have a value very close to ( table [ tab.1 ] ) , or = with comfortably in all cases .hence , i.e. 
, , which _ a posteriori _ justifies the approximation leading to equation .the formal solution to equation can be easily written down .after initial transients die out , the phase is given by the meaning of this filter equation is better understood if we recast it in frequency domain : \,\widetilde{t}_{\rm ti}(\omega ) \label{eq.18c}\ ] ] this equation shows again that the analog process is also the superposition of two contributions : a _ high - pass _ filter proportional to , and a _ low - pass _ contribution proportional to : the first arises in equation due to the titanium temperature derivative , while the second appears related to the term proportional to the titanium absolute temperature .this split dependence of the ow response to temperature pulses points to two different physical thermal processes affecting the glass , as already discussed in section [ sec.4 - 1 ] .we can now make use of equations to identify the coefficients and in terms of the fit parameter values of table [ tab.1 ] .taking , we find that , and .in addition , we can take advantage of the relationship between the auto - regressive and the dlr model parameters to obtain an expression relating both models . accordingly, equation can be rewritten as if we go back to the dlr fit formula , equation , the following expression ensues : after the term has been been safely neglected in front of .we thus see that temperatures in the titanium flange and in the ow glass are related by a low - pass with a time constant , , of a few hundred seconds note that and have different signs , table [ tab.1 ] .it must be recalled that this relationship emerges out of the good quality of the fits by both dlr and _ arma_(2,1 ) models , and is key to understanding why _ only _ the titanium gauge is required to make a good prediction of the ow response to temperature variations , as will be required in flight . the _ physical reason _ for the observed relationship between temperatures is to be sought in the properties of the interface between the titanium and the glass in the ow .while the optical window is a crucial element in the _ ltp_optical metrology system , it thankfully appears that it is quite stable to temperature fluctuation noise so far as the latter is compliant with mission environmental requirements .the present paper contains a rather thorough analysis of such behaviour , based on experimental data gathered through different runs of on - ground laboratory measurements .our main purpose was to prepare for thermal diagnostics analysis tools in flight , and to gain as much understanding of the underlying physical processes as possible .this means we need to know how noisy data retrieved by thermometers can be converted into phasemeter fluctuations , thereby quantifying the contribution of temperature random variations to the total mission noise budget which is the ultimate objective of _ _ in preparation for _ lisa_. 
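as an illustration of that conversion, the noise projection of section [sec.5] amounts to running the titanium temperature read-out through the identified single-pole filter; below is a minimal sketch in which placeholder coefficients stand in for the fitted values of table [tab.1] and a synthetic temperature stream replaces the quiet-run data.

```python
import numpy as np
from scipy import signal

# discrete filter  phi[n] = c1*phi[n-1] + d0*T_ti[n] + d1*T_ti[n-1];
# the coefficient values below are illustrative placeholders only
c1, d0, d1 = 0.995, 4.0e-6, -3.96e-6
fs = 1.0                                   # assumed temperature sampling rate (hz)

# synthetic titanium temperature noise standing in for a quiet-run record
t_ti_noise = 1e-4 * np.cumsum(np.random.randn(200000))

# project the temperature noise onto phase through the identified filter
phase_proj = signal.lfilter([d0, d1], [1.0, -c1], t_ti_noise)

# spectral density of the projected phase noise, to be compared with
# the measured phasemeter noise floor in the mbw
f, psd = signal.welch(phase_proj, fs=fs, nperseg=2 ** 14)
asd = np.sqrt(psd)
```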
our most relevant finding is the discovery that temperature readings in the titanium flange embracing the ow plane - parallel plate relate to phase values through an _arma_(2,1 ) transfer function .although this is the result of numerical analysis , hence lends itself to parameter estimation variances , it appears to be considerably robust .the analysis has shown that the _arma_(2,1 ) process naturally splits up as the sum of a high - pass and a low - pass process , each of them with significantly different relative weights which result in the high - pass dominating above mhz , while the low - pass takes over in the lower _lisa_frequency band , i.e. , at 0.1 mhz and below .a major achievement of the analysis has been the identification of the physical processes responsible for this behaviour : mechanical stresses induced by differential thermal expansion of metal and glass are associated to the high pass term , while effects account for the low - pass .we consider the analysis presented here as rather complete in some of its essential traits .but there are still open issues which call for further study . for example, heater generation of test signals must be monitored by temperature sensors close to the activated heaters due to lag effects in remoter spots for the procedures described herein to be fully operative .this raises some caveats regarding full applicability of the noise projection algorithms , as the sources of heat dominating a given temperature reading may not be clear in _ ltp_science operation mode .a more global tool for full _ ltp_thermal diagnostics , which takes into account the specific features of each individual part of the system must be assembled .research on this is currently underway which will reported in due course .support for this work came from project esp2004 - 01647 of plannacional del espacio of the spanish ministry of education and science ( mec ) .mn acknowledges a grant from generalitat de catalunya , and js a grant from mec .99 garca marn af 2006 interferometric characterization of the optical window for lisa and lisa pathfinder , merkowitz s and livas jc eds proceedings of the _ lisa_symposium # 6 , _ aip conf proc _ * 873 * , 344 - 348
|
vacuum conditions inside the _ltp_ gravitational reference sensor must be kept under 10 pa, a rather demanding requirement. the _optical window_ (ow) is an interface which seals the vacuum enclosure and, at the same time, lets the laser beam through for interferometric metrology with the test masses. the ow is a plane-parallel plate clamped in a titanium flange, and is considerably sensitive to thermal and stress fluctuations. it is critical for the required precision measurements, hence its temperature will be carefully monitored in flight. this paper reports on the results of a series of ow characterisation laboratory runs, intended to study its response to selected thermal signals, the fit of that response to numerical models, and the meaning of the latter. we find that a single-pole _arma_ transfer function provides a consistent approximation to the ow response to thermal excitations, and we derive its relationship with the physical processes taking place in the ow. we also show how system noise reduction can be accomplished by means of that transfer function. _keywords_: _lisa_, _lisa_ pathfinder, gravitational wave detector, interferometry, thermal diagnostics.
|
self - interest often leads to freeloading on the contributions of others in the dynamics associated with common goods and joint enterprises [ 1,2 ] .as is well known , incentivization , such as rewarding and punishing , is a popular method for harnessing the selfish action and for motivating individuals to behave cooperatively [ 313 ] .experimental and theoretical studies on joint enterprises under various incentive schemes are growing [ 1422 ] .obviously , whether rewards or penalties , sufficiently large incentives can transform freeloaders into full cooperators , and incentives with small impact do nothing on the outcomes [ 22 ] .however , incentivizing is costly , and such heavy incentives often incurs serious costs on those who provide the incentives , whether in a peer - to - peer or institutional manner .previous game - theoretic studies on the evolution of cooperation with incentives have focused on public good games with compulsory participation , and revealed that the intermediate degrees of punishment lead to a couple of stable equilibria , full defection and full cooperation [ 4,5,10,13,22,23 ] . in this bi - stable dynamic ,establishing full cooperation requires an initially sufficient fraction of cooperators , or ex ante adjustment to overcome the initial condition [ 10,23 ] .this situation is a coordination game [ 24 ] , which is a model of great interest for analyzing a widespread coordination problem ( e.g. , in choosing distinct technical standards ) .in contrast to a traditional case with compulsory participation , another approach to the evolution of cooperation is an option to opt out of joint enterprises [ 2537 ] .the opting - out option can make the freeloader problem relaxed : individuals can exit a joint venture when stuck in a state in which all freeload off one another ( `` economic stalemate '' ) , and then pursue a stand - alone project ; if a joint venture with mutual cooperation is more profitable than in isolation , the individuals once exited will switch to contributing to the venture .this situation , however , will also find defection attractive . 
thus , joint enterprises with optional participation can give rise to a rock - paper - scissors cycle [ 2831 ] .recently , sasaki _ et al ._ [ 22 ] revealed that considering optional participation as well as institutional incentives can effect fully cooperative outcomes for the intermediate ranges of incentives .they demonstrated that opting - out combined with rewarding is not very effective at establishing full cooperation , but opting - out combined with punishment is very effective at establishing cooperation .although there are a series of existing papers on the interplay of punishment and opting - out mechanisms [ 3844 ] , the main points of these earlier studies comprise solving the puzzling issue of second - order freeloading : the exploitation of the efforts of others to uphold incentives for cooperation [ 2 , 4 , 7 , 45 , 46 ] .[ 22 ] consider incentives controlled exclusively by a centralized authority ( like the empire or state ) [ 4750 ] , and thus , their model is already free from the second - order freeloader problem .here we analytically provide a full classification of the replicator dynamics in a public good game with institutional incentives and optional participation .we clarify when and how cooperation can be selected over defection in a bi - stable situation associated with institutional punishment without requiring any ability to communicate among individuals .in particular , assuming that the penalties are large enough to cause bi - stability with both full cooperation and full defection ( no matter what the basins of attraction are ) in cases of compulsory participation , cooperation will necessarily become selected in the long term , regardless of the initial conditions .to describe our institutional - incentive model , we start from public good games with group size .the players in a group are given the opportunity to participate in a public good game .we assume that participation pays a fixed entrance fee to the sanctioning institution , whereas non - participation yields nothing .we denote by the number of players who are willing to participate ( ) and assume that at least two participants are required for the game to occur [ 28,3942 ] .if the game does take place , each of the participants in the group can decide whether to invest a fixed amount into a common pool , knowing that each contribution will be multiplied by and then shared equally among all _ other _ participants in the group .thus , participants have no direct gain from their own investments [ 6,4143,45 ] .if all of the participants invest , they obtain a net payoff .the game is a social dilemma , which is independent of the value of , because participants can improve their payoffs by withholding their contribution .let us next assume that the total incentive stipulated by a sanctioning institution is proportional to the group size and hence of the form , where is the ( potential ) per capita incentive . 
if rewards are employed to incentivize cooperation , these funds will be shared among the so - called `` cooperators '' who contribute ( see [ 51 ] for a voluntary reward fund ) .hence , each cooperator will obtain a bonus that is denoted by , where denotes the number of cooperators in the group of participants .if penalties are employed to incentivize cooperation , `` defectors '' who do not contribute will analogously have their payoffs reduced by , where denotes the number of defectors in the group of players ( ) .we consider an infinitely large and well - mixed population of players , from which n samples are randomly selected to form a group for each game .our analysis of the underlying evolutionary game is based especially on the replicator dynamics [ 52 ] for the three corresponding strategies of the cooperator , defector , and non - participant , with respective frequencies , , and . the combination of all possible values of with and forms the triangular state space .we denote by c , d , and n the three vertices of that correspond to the three homogeneous states in which all cooperate ( ) , defect ( ) , or are non - participants ( ) , respectively . for ,the replicator dynamics are defined by where denotes the average payoff in the entire population ; , , and denote the expected payoff values for cooperators , defectors , and non - participants , respectively ; and is used to specify one of three different incentive schemes , namely , `` without incentives , '' `` with rewards , '' and `` with punishment , '' respectively . because non - participants have a payoff of 0 , , and thus , .we note that if , the three edges of the state space form a heteroclinic cycle without incentives : n c d n ( figs .2a or 3a ) .defectors dominate cooperators because of the cost of contribution , and non - participants dominate defectors because of the cost of participation . finally , cooperators dominate non - participants because of the net benefit from the public good game with . in the interior of ,all of the trajectories originate from and converge to n , which is a non - hyperbolic equilibrium .hence , cooperation can emerge only in brief bursts , sparked by random perturbations [ 29,41 ] . here, we calculate the average payoff for the whole population and the expected payoff values for cooperators and defectors . in a group with co - participants ( ) , a defector or a cooperator obtains from the public good game an average payoff of [ 41 ] .hence , note that is the probability of finding no co - players and , thus , of being reduced to non - participation .in addition , cooperators contribute with a probability , and thus , . hence , $ ] .we now turn to the cases with institutional incentives .first , we consider penalties .because cooperators never receive penalties , we have . in a group in which the co -participants include cooperators ( and thus , defectors ) , switching from defecting to cooperating implies avoiding the penalty .hence , \nonumber \\ & = & -(c-\delta)(1-z^{n-1 } ) + \delta \frac{x(1-(1-y)^{n-1}}{y},\end{aligned}\ ] ] and thus , \nonumber \\ & = & ( 1-z^{n-1})((r-1)cx - \sigma(1-z ) - \delta y ) - \delta x(1-(1-y)^{n-1}).\end{aligned}\ ] ] next , we consider rewards .it is now the defectors who are unaffected , implying . 
in a group with co - participants , including cooperators , switching from defecting to cooperating implies obtaining the reward .hence , \nonumber \\ & = & -(c-\delta)(1-z^{n-1 } ) + \delta \frac{y(1-(1-x)^{n-1}}{x},\end{aligned}\ ] ] and thus , \nonumber \\ & = & ( 1-z^{n-1})((r-1)cx - \sigma(1-z ) + \delta x ) - \delta y(1-(1-x)^{n-1}).\end{aligned}\ ] ]we investigated the interplay of institutional incentives and optional participation . as a first step, we considered replicator dynamics along the three edges of the state space . on the dn - edge ( ) , this dynamic is always d n because the payoff for non - participating is better than that for defecting by at least the participation fee , regardless of whether penalties versus rewards are in place . on the nc - edge ( ) , it is obvious that if the public good game is too expensive ( i.e. , if , under penalties or , under rewards ) , players will opt for non - participation more than cooperation . indeed , n becomes a global attractor because holds in .we do not consider further cases but assume that the dynamic of the nc - edge is always n c. on the cd - edge ( ) , the dynamic corresponds to compulsory participation , and eq .( 1 ) reduces to . clearly , both of the ends c ( ) and d ( ) are fixed points . under penalties ,the term for the payoff difference is under rewards , it is because , strictly decreases , and strictly increases , with .the condition under which there exists an interior equilibrium r on the cd - edge is next , we summarize the game dynamics for compulsory public good games ( fig .1 ) . for such a small that , defection is a unique outcome ; d is globally stable , and c is unstable .for such a large that , cooperation is a unique outcome ; c is globally stable , and d is unstable . for the intermediate values of ,cooperation evolves in different ways under penalties versus rewards , as follows . under penalties ( fig .1a ) , as crosses the threshold , c becomes stable , and an unstable interior equilibrium r splits off from c. the point r separates the basins of attraction of c and d. penalties cause bi - stable competition between cooperators and defectors , which is often exhibited as a coordination game [ 24 ] ; one or the other norm will become established , but there can be no coexistence . with increasing , the basin of attraction of d becomes increasingly smaller , until attains the value of . here , r merges with the formerly stable d , which becomes unstable .in contrast , under rewards ( fig .1b ) , as crosses a threshold , d becomes unstable , and a stable interior equilibrium r splits off from d. the point r is a global attractor .rewards give rise to the stable coexistence of cooperators and defectors , which is a typical result in a snowdrift game [ 53 ] .as increases , the fraction of cooperators within the stable coexistence becomes increasingly larger . finally , as reaches another threshold , r merges with the formerly unstable c , which becomes stable .we note that and have the same value , regardless of whether we take into account rewards or penalties .now , we consider the interior of the state space .we start by proving that , for , if an equilibrium q exists in the interior , it is unique . for this purpose, we introduce the coordinate system in , with , and we rewrite eq .( 1 ) as dividing the right - hand side of eq . ( 10 ) by , which is positive in , corresponds to a change in velocity and does not affect the orbits in [ 52 ] . 
using eqs .( 3)(6 ) , this transforms eq .( 10 ) into the following . under penalties ,( 10 ) becomes , \nonumber \\ \dot{z } & = & z(1-z)[\sigma + \delta - ( ( r-1)c + \delta)f + \delta f(1-f)h(f , z)],\end{aligned}\ ] ] whereas under rewards , it becomes , \nonumber \\\dot{z } & = & z(1-z)[\sigma - ( ( r-1)c + \delta)f + \delta f(1-f)h(1-f , z)],\end{aligned}\ ] ] where ^{n-1}}{(1-f)(1-z^{n-1 } ) } = \frac{1+[f+(1-f)z]+ \cdots + [ f+(1-f)z]^{n-2}}{1+z+ \cdots + z^{n-2}}.\ ] ] note that and . at an interior equilibrium q , the three different strategies must have equal payoffs , which , in our model , means that they all must equal 0 .the conditions under penalties and under rewards imply that is given by respectively .thus , if it exists , the interior equilibrium q must be located on the line given by . from eqs . ( 11 ) and ( 12 ) , q must satisfy in the specific case when , by solving eqs .( 14 ) and ( 15 ) with , we can see that the dynamic has an interior equilibrium only when under penalties or under rewards . at this moment , the aforementioned line consists of a continuum of equilibria and connects r and n ( fig .this is a degenerate case of the interior equilibrium , but in sasaki _[ 22 ] , this case was not clearly distinguished from the general form described below .we next show that is uniquely determined in the general case for . both equations in eq .( 15 ) have at most one solution with respect to . because is independent of , it is sufficient to show that is strictly monotonic for every .we first consider penalties . a straightforward computation yields \nonumber \\& = & \frac{(n-1 ) z^{n-2}}{(1-f)(1-z^{n-1})^2 } \nonumber \\ & & \times \left [ 1-\left\ { \left ( \frac{f+(1-f)z}{z } \right ) ( ( 1-f)+fz ) \right\}^{n-2 } \frac{(1-f)+fz^{n-2}}{((1-f)+fz)^{n-2 } } \right ] .\nonumber \\ & & \end{aligned}\ ] ] we note that and this inequality obviously holds for . by induction for every larger ,if it holds for , it must hold for because consequently , the square bracketed term in the last line of eq .( 16 ) is negative .thus , for every .we now consider rewards and use the same argument as above .this concludes our proof of the uniqueness of q. for , as increases , q splits off from r ( with ) and moves across the state space along the line given by eq .( 14 ) and finally exits this space through n. the function decreases with increasing , and the right - hand side of eq .( 15 ) decreases with increasing , which implies that increases with . by substituting eq . ( 13 ) into eq .( 15 ) , we find that the threshold values of for q s entrance ( ) and exit ( ) into the state space are respectively given by where ( and ) under penalties , and ( and ) under rewards .we note that , which is an equality only for .we next prove that for , q is a saddle point .we first consider penalties using eq .( 11 ) . because the square brackets in eq . ( 11 ) vanish at q , the jacobian at q is given by \ , & \,\ , \displaystyle \delta f(1-f)z(1-z ) \frac{\partial h}{\partial z } \end{pmatrix } \right|_{\rm q},\ ] ] where and . using , , and , which yields \frac{\partial h(f , z)}{\partial z } < 0.\ ] ] therefore, q is a saddle point .we next consider rewards using eq .similarly , we find that the jacobian at q is given by \ , & \,\ , \displaystyle \delta -f(1-f)z(1-z ) \frac{\partial h}{\partial z } \end{pmatrix } \right|_{\rm q},\ ] ] where and is as in eq .( 21 ) . using , , and , it follows again that .threrefore , q is a saddle point . 
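the global dynamics classified in the next section can also be explored numerically. the following sketch (not the authors' code) integrates the replicator equation (1) with a crude euler step and estimates the expected payoffs by monte-carlo sampling of group compositions from the verbal description of the game given above; the assumption that the incentive pool equals n*delta and is divided among the group's cooperators (under rewards) or defectors (under penalties) should be checked against eqs. (3)-(6), and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# illustrative parameter values (n, r, c, sigma, delta as defined in the text)
n, r, c, sigma, delta = 5, 3.0, 1.0, 0.5, 0.4
scheme = "penalty"                            # or "reward"

def coplayer_counts(x, y, z, samples):
    # cooperators and defectors among the n-1 randomly sampled co-players
    nc = rng.binomial(n - 1, min(x, 1.0), size=samples)
    p_d = y / (y + z) if (y + z) > 0.0 else 0.0
    nd = rng.binomial(n - 1 - nc, p_d)
    return nc, nd

def expected_payoffs(x, y, z, samples=10000):
    # monte-carlo estimate of P_c and P_d; the pool n*delta split among the
    # group's cooperators (rewards) or defectors (penalties) is an assumption
    nc, nd = coplayer_counts(x, y, z, samples)
    s = 1 + nc + nd                            # participants, including the focal player
    lone = s < 2                               # no co-participant: reduced to non-participation
    share = r * c * nc / np.maximum(s - 1, 1)  # return from the co-players' contributions
    pc = -sigma - c + share
    pd = -sigma + share
    if scheme == "reward":
        pc = pc + n * delta / (nc + 1)
    else:
        pd = pd - n * delta / (nd + 1)
    pc[lone] = 0.0
    pd[lone] = 0.0
    return pc.mean(), pd.mean()

def step(state, dt=0.1):
    # one euler step of eq. (1); non-participants earn 0
    x, y, z = state
    pc, pd = expected_payoffs(x, y, z)
    pbar = x * pc + y * pd
    new = state + dt * state * (np.array([pc, pd, 0.0]) - pbar)
    new = np.maximum(new, 0.0)
    return new / new.sum()

state = np.array([0.10, 0.80, 0.10])          # initial (x, y, z)
for _ in range(1500):
    state = step(state)
print(state)   # long-run frequencies of cooperators, defectors and non-participants
```

with enough samples per step, trajectories started from different initial conditions should reproduce the qualitative phase portraits discussed below.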
here, we analyze in detail the global dynamics using eqs . ( 11 ) and ( 12 ) , which are well defined on the entire unit square .the induced mapping , , contracts the edge onto the vertex n. note that and as well as both ends of the edge , and , are hyperbolic equilibria , except when each undergoes bifurcation ( as shown later ) .we note that the dynamic on the -edge is unidirectional to without incentives .first , we examine penalties . from eq .( 11 ) , the jacobians at c and are respectively given by \end{pmatrix } \quad \text{and } \quad j_{{\rm n}_1 } = \begin{pmatrix } c-2\delta & 0 \\ 0 & ( r-1)c-\sigma \end{pmatrix}.\ ] ] from our assumption that , it follows that if , then , and thus , c is a saddle point ; otherwise , and , and thus , c is a sink .regarding , if , is a source ( and ) ; otherwise , is a saddle ( ) .next , the jacobians at d and are respectively given by if , d is a saddle point ( ) , and is a sink ( and ) ; otherwise , d is a source ( and ) , and is a saddle point ( ) .we also analyze the stability of r. as increases from to , the boundary repellor enters the cd - edge at c and then moves to d. the jacobian at r is given by its upper diagonal component is positive because and , whereas the lower component vanishes at .therefore , if , r is a saddle point ( ) and is stable with respect to ; otherwise , if , r is a source ( and ) .in addition , a new boundary equilibrium can appear along the -edge . solving in eq . (11 ) yields ; thus , s is unique .s is a repellor along the edge ( as is r ) .as increases , s enters the edge at ( for ) and exits it at ( for ) .the jacobian at s is given by again , its upper diagonal component is positive . using , we find that the sign of the lower component changes once , from positive to negative , as increases from to .therefore , s is initially a source ( and ) but then turns into a saddle point ( ) , which is stable with respect to .we give a full classification of the global dynamics under penalties , as follows . 1 . for ( fig .c and d are saddle points , is a source , and is a sink .there is no other equilibrium , and holds in the interior state space .all interior orbits originate from and converge to . is globally stable . after applying the contraction map, we find that the interior of is filled with homoclinic orbits originating from and converging to n. 2 . as crosses ( fig .2b ) , c becomes a sink , and the equilibrium r enters the cd - edge at c. r is unstable along that edge but is stable with respect to .therefore , there is an orbit originating from and converging to r that separates the basins of attraction of c and .all of the orbits in the basin of have their -limits at , as before .hence , the corresponding region in is filled with homoclinic orbits and is surrounded by a heteroclinic cycle n r d n. however , if the population is in the vicinity of n , small and rare random perturbations will eventually send the population into the basin of attraction of c ( as is the case for ) .3 . as crosses ( fig .2c ) , becomes a saddle point , and a new equilibrium s enters the -edge at .s is a source .as increases , s moves toward .if holds , then for , there is still an orbit originating from s and converging to r that separates the state space into basins of attraction of c and .all of the orbits in the basin of have their -limits at , as before . 
in , the separatrix nr and the nc - edge now intersect transversally at n , and the entrance of a minority of participants ( including cooperators and defectors ) into the greater population of non - participants may be successful .4 . as crosses ( fig .2d ) , the saddle point q enters the interior of through r , which becomes a source .based on the uniqueness of q and the poincar - bendixson theorem ( [ 52 ] , appendix a ) , we can see that there is no such homoclinic orbit originating from and converging to q , and the unstable manifold of q must consist of an orbit converging to c and an orbit converging to ; the stable manifold of q must consist of an orbit originating from d and an orbit originating from s ( or , in the case that , from for ) .the stable manifold separates the basins of attraction of c and ; the unstable manifold separates the basin for into two regions .one of them is filled with orbits originating from s ( or from under the above conditions ) and converging to . for ,this means that the corresponding region is filled with homoclinic orbits and is surrounded by a heteroclinic cycle n q n ( fig .as further increases , q moves across , from the cd - edge to the -edge along the line . for , r ands undergo bifurcation simultaneously , and the linear continuum of interior equilibria , which connects r and s , appears only at the bifurcation point ( fig .4a ) . 5 .as crosses ( fig .2e ) , q exits the state space through s , which then becomes saturated . for larger values of , there is no longer an interior equilibrium .s is a saddle point , which is connected with the source r by an orbit leading from r to s. 6 . finally , as crosses ( fig .2f ) , r and s simultaneously exit , through d and , respectively . for , and saddle points , d is a source , and c is a sink . holds throughout the state space .all of the interior orbits originate from d and converge to c. hence , c is globally stable .let us now turn to rewards . from eq .( 12 ) , the jacobians at d and are if , d is a saddle point ( ) ; otherwise , d is a source ( and ) .regarding , if , is a sink ( and ) ; otherwise , is a saddle point ( ) .meanwhile , the jacobians at c and are \end{pmatrix } \quad \text{and } \quad j_{{\rm n}_1 } = \begin{pmatrix } c-\delta & 0 \\ 0 & ( r-1)c-\sigma+\delta \end{pmatrix}.\ ] ] from , it follows that if , c is a saddle point ( ) , and is a source ( and ) ; otherwise , c is a sink ( and ) , and is a saddle point ( ) .we also analyze the stability of r. as increases from to , the boundary attractor r enters the cd - edge at d and then moves toward c. the jacobian at r is given by its upper diagonal component is negative because and , and the lower component vanishes at .therefore , if , r is a saddle point ( ) and unstable with respect to ; otherwise , if , r is a sink ( and ) .similarly , a boundary equilibrium s can appear along the -edge . solving in eq .( 12 ) yields , and thus , s is unique .s is an attractor along the edge ( as is r ) . as increases, s enters the edge at ( for ) and exits at ( for ) .the jacobian at s is \end{pmatrix}.\ ] ] again , its upper diagonal component is positive .using , we find that the sign of the lower component changes once , from negative to positive , as increases from to .therefore , s is initially a sink ( and ) and then becomes a saddle point ( ) , which is unstable with respect to .a full classification of the global dynamics under rewards is as follows . 1 . for ( fig .3a ) , c and d are again saddle points , is a source , and is a sink . 
holds in the interior state space , and all of the interior orbits originate from and converge to .the interior of is filled with homoclinic orbits originating from and converging to n. 2 .as crosses ( fig .3b ) , d turns into a source , and the saddle point r enters the cd - edge through d. there exists an orbit originating from r and converging to .in contrast to the case with penalties , remains a global attractor .a region separated by the orbit r encloses orbits with as their -limit .therefore , in , the corresponding region is filled with homoclinic orbits that are surrounded by a heteroclinic cycle n c r n. 3 . as crosses , becomes a saddle point , and the equilibrium s enters the -edge at .s is a sink ( and thus , a global attractor ) .as increases , s moves to .if holds , then for , there exists an orbit originating from r and converging to s , which separates the interior state space into two regions .one of these regions consists of orbits originating from , corresponding in to a region filled with homoclinic orbits .the other region consists of orbits originating from d. in , the separatrix rn and the nc - edge intersect transversally at n. 4 . as crosses ( fig .3d ) , the saddle point q enters the interior state space through r , which then becomes a sink .there is no homoclinic loop for q , as before , and now , we find that the stable manifold of q must consist of two orbits originating from d and .the unstable manifold of q must consist of an orbit converging to r and another converging to s or , in the case that , converging to for ( fig .the stable manifold separates the basins of attraction of r and s ( or under the above conditions ) ; the unstable manifold separates the basin for s ( or ) into two regions .one of these regions is filled with orbits issuing from and converging to s ( or ) .the corresponding region in is filled with homoclinic orbits and is surrounded by a heteroclinic cycle n q n ( figs .3c and 3d ) . as continues to increase , q moves through , from the cd - edge to , along the line . for , r and s undergo bifurcation simultaneously , and the continuum of interior equilibria , which connects r and s , appears only at the bifurcation point ( fig .4b ) . 5 . as crosses ( fig .3e ) , q exits the state space through s , which then becomes a saddle point . for larger values of , there is no longer an interior equilibrium .s is connected with the sink r by an orbit from s to r. all of the interior orbits converge to r. 6 . finally , as crosses ( fig .3f ) , r and s simultaneously exit through c and , respectively . just as in the case with punishment , for , and are saddle points , and d is a source . finally , c is a sink . holds throughout the state space .all of the interior orbits originate from d and then converge to c. hence , c is globally stable .we considered a model for the evolution of cooperation through institutional incentives and analyzed in detail evolutionary game dynamics .specifically , based on a public good game with optional participation , we fully analyzed how opting - opt impacts game dynamics ; in particular , opting - out can completely relax a coordination problem associated with punishment for a considerably broader range of parameters than in cases of compulsory participation .we start from assuming that there is a state - like institution that takes exclusive control of individual - level sanctions in the form of penalties and rewards . 
in our extended model, nobody is forced to enter a joint enterprise that is protected by the institutional sanctioning , however , whoever is willing to enter , must be charged at the entrance .further , if one proves unable or unwilling to pay , the sanctioning institution can ban that person from participation in the game . indeed ,joint ventures in real life are mostly protected by enforceable contracts in which members can freely participate , but are bound by a higher authority .for example , anyone can opt to not participate in a wedding vow ( with donating to the temple or church ) , but once it is taken , it is the strongest contract among enforceable contracts . as far as we know, such higher authorities always demand penalties if contracts are broken . based on our mathematical analysis, we argue that institutional punishment , rather than institutional rewards , can become a more viable incentivization scheme for cooperation when combined with optional participation .we show that combining optional participation with rewards can complicate the game dynamics , especially if there is an attractor with all three strategies : cooperation , defection , and non - participation , present .this can only marginally improve group welfare for a small range of per capita incentive , with ( fig .3b ) . within this interval ,compulsory participation can lead to partial cooperation ; however , optional participation eliminates the cooperation and thus drives a population into a state in which all players exit .hence , freedom of participation is not a particularly effective way of boosting cooperation under a rewards scenario . under penalties ,the situation varies considerably .indeed , as soon as ( fig .2b ) , the state in which all players cooperate abruptly turns into a global attractor for optional participation .when just exceeds , group welfare becomes maximum .meanwhile , for compulsory participation , almost all of the ( boundary ) state space between cooperation and defection still belongs to the basin of attraction of the state in which all players defect . because , where is the group size , and is the net contribution cost ( a constant ) , when is larger , the minimal institutional sanctioning cost to establish full cooperation is smaller .there are various approaches to equilibrium selection in -person coordination games for binary choices [ 5456 ] . a strand of literature bases stochastic evolution models [ 5759 ] , in which typically , a `` risk - dominant '' equilibrium [ 60 ] that has the larger basin of attraction is selected through random fluctuation in the long run . in contrast , considering optional participation, our model typically selects the cooperative equilibrium which provides the higher group welfare , even if the cooperative equilibrium has the smaller basin of attraction when participation is compulsory than has the defective equilibrium . in the sense of favoring the efficient equilibrium , our result is similar to that found in a decentralized partner - changing model proposed by oechssler [ 61 ] , in which players may occasionally change interaction groups . 
throughout centralized institutional sanctionsmentioned so far , norm - based cooperation is less likely to suffer from higher - order freeloaders , which have been problematic in modeling decentralized peer - to - peer sanctions [ 2,62 ] .in addition , it is clear that sanctioning institutions will stipulate a lesser antisocial punishment targeted at cooperators [ 63 ] , which can prevent the evolution of pro - social behaviors ( [ 64,65 ] , see also [ 36 ] ) . indeed , punishing cooperators essentially promote defectors , who will reduce the number of participants willing to pay for social institutions . for self - sustainability , thus , sanctioning institutions should dismiss any antisocial schemes that may lead to a future reduction in resources for funding the institution .thus , we find that our model reduces the space of possible actions into a very narrow framework of alternative strategies , in exchange for increasing the degree of the institution s complexity and abstractiveness . in practice , truly chaotic situations which offer a very long list of possibilities are unfeasible and create inconvenience , as is described by michael ende in `` _ the prison of freedom _ '' [ 1992 ] .participants in all economic experiments usually can make their meaningful choices only in a short and regulated list of options , as is the way with us in real life .our result indicates that a third party capable of exclusively controlling incentives and membership can play a key role in selecting a cooperative equilibrium without ex ante adjustment .the question of how such a social order can emerge out of a world of chaos is left entirely open .we thank ke brnnstrm , ulf dieckmann , and karl sigmund for their comments and suggestions on an earlier version of this paper .this study was enabled by financial support by the fwf ( austrian science fund ) to ulf dieckmann at iiasa ( tect i-106 g11 ) , and was also supported by grant rfp-12 - 21 from the foundational questions in evolutionary biology fund .first , we prove that a homoclinic loop that originates from and converges to q does not exist . using the poincar - bendixson theorem [ 52 ] and the uniqueness of an interior equilibrium ,we show that if it does exist , there must be a point inside the loop such that both of its - and -limit sets include q. this contradicts the fact that q is a saddle point .indeed , there may be a section that cuts through q such that the positive and negative orbits of infinitely often cross it ; however , it is impossible for a sequence consisting of all the crossing points to originate from and also converge to the saddle point q. hence , there is no homoclinic orbit of q. next , we show that orbits that form the unstable manifold of q do not converge to the same equilibrium ( indeed , this is a sink ) .if they do , the closed region that is surrounded by the orbits must include a point such that its -limit set is q. 
using the poincar - bendixson theorem and the uniqueness of an interior equilibrium , the -limit set for must include q ; this is a contradiction .similarly , we can prove that the orbits that form the stable manifold of q do not issue from the same equilibrium .hardin g ( 1968 ) the tragedy of the commons .science 162:12431248 .doi:10.1126/science.162.3859.1243 ostrom e ( 1990 ) governing the commons : the evolution of institutions for collective action .cambridge university press , new york olson e ( 1965 ) the logic of collective action : public goods and the theory of groups harvard university press , cambridge , ma boyd r , richerson p ( 1992 ) punishment allows the evolution of cooperation ( or anything else ) in sizable groups .ethol sociobiol 13:171195 .doi:10.1016/0162 - 3095(92)90032-y sigmund k , hauert c , nowak ma ( 2001 ) reward and punishment .proc natl acad sci usa 98:1075710762 .doi : 10.1073/pnas.161155698 fehr e , gchter s ( 2000 ) cooperation and punishment in public goods experiments .am econ rev 90:980994 .doi:10.1257/aer.90.4.980 oliver p ( 1980 ) rewards and punishments as selective incentives for collective action : theoretical investigations .am j sociol 85:13561375 .doi:10.1086/227168 sigmund k ( 2007 ) punish or perish ?retaliation and collaboration among humans .trends ecol evol 22:593600 .doi:10.1016/j.tree.2007.06.012 rand dg , dreber a , ellingsen t , fudenberg d , nowak ma ( 2009 ) positive interactions promote public cooperation .science 325:12721275 .doi:10.1126/science.1177418 boyd r , gintis h , bowles s ( 2010 ) coordinated punishment of defectors sustains cooperation and can proliferate when rare .science 328:617620 .doi:10.1126/science.1183665 balliet d , mulder lb , van lange pam ( 2011 ) reward , punishment , and cooperation : a meta - analysis .psychol bull 137:594615 .doi:10.1037/a0 gchter s ( 2012 ) social science : carrot or stick ?nature 483:3940 .doi:10.1038/483039a sasaki t , uchida s ( 2013 ) the evolution of cooperation by social exclusion .proc r soc b 280:1752 .doi:10.1098/rspb.2012.2498 andreoni j , harbaugh wt , vesterlund l ( 2003 ) the carrot or the stick : rewards , punishments , and cooperation .am econ rev 93:893902 .doi:10.1257/000282803322157142 grerk o , irlenbush b , rockenbach b ( 2006 ) the competitive advantage of sanctioning institutions .science 312:108111 .doi:10.1126/science.1123633 sefton m , shupp r , walker jm .( 2007 ) the effect of rewards and sanctions in provision of public goods .econ inq 45:671690 .doi:10.1111/j.1465 - 7295.2007.00051.x grerk o , irlenbusch b , rockenbach b ( 2009 ) motivating teammates : the leader s choice between positive and negative incentives .j econ psychol 30:591607 .doi:10.1016/j.joep.2009.04.004 ogorman r , henrich j , van vugt m. ( 2009 ) constraining free riding in public goods games : designated solitary punishers can sustain human cooperation .proc r soc b 276:323329 .doi:10.1098/rspb.2008.1082 hilbe c , sigmund k ( 2010 ) incentives and opportunism : from the carrot to the stick .proc r soc b 277:24272433 . 
doi:10.1098/rspb.2010.0065 sutter m , haigner s , kocher mg ( 2010 ) choosing the carrot or the stick ?endogenous institutional choice in social dilemma situations .rev econ stud 77:15401566 .doi:10.1111/j.1467 - 937x.2010.00608.x szolnoki a , szab g , czak l ( 2011 ) competition of individual and institutional punishments in spatial public goods games .phys rev e 84:046106 .doi:10.1103/physreve.84.046106 sasaki t , brnnstrm , dieckmann u , sigmund k ( 2012 ) the take - it - or - leave - it option allows small penalties to overcome social dilemmas .proc natl acad sci usa 109:11651169 .doi:10.1073/pnas.1115219109 panchanathan k , boyd r ( 2004 ) indirect reciprocity can stabilize cooperation without the second - order free rider problem .nature 432 : 499502 .doi:10.1038/nature02978 skyrms b ( 2004 ) the stag hunt and the evolution of social structure . cambridge university press , cambridge , uk aktipis ca ( 2004 ) know when to walk away : contingent movement and the evolution of cooperation .j theor biol 231:249260 .doi:10.1016/j.jtbi.2004.06.020 orbell jm , dawes rm ( 1993 ) social welfare , cooperators advantage , and the option of not playing the game .am soc rev 58:787800 .j , kitcher p ( 1995 ) evolution of altruism in optional and compulsory games .j theor biol 175:161171 .doi:10.1006/jtbi.1995.0128 hauert c , de monte s , hofbauer j , sigmund k ( 2002 ) volunteering as red queen mechanism for cooperation in public goods games .science 296:11291132 .doi:10.1126/science.1070582 hauert c , de monte s , hofbauer j , sigmund k ( 2002 ) replicator dynamics for optional public good games .j theor biol 218:187194 .doi:10.1006/jtbi.2002.3067 semmann d , krambeck hj , milinski m ( 2003 ) volunteering leads to rock - paper - scissors dynamics in a public goods game .nature 425:390393 .doi:10.1038/nature01986 mathew s , boyd r ( 2009 ) when does optional participation allow the evolution of cooperation .proc r soc lond b 276:11671174 .doi:10.1098/rspb.2008.1623 izquierdo ss , izquierdo lr , vega - redondo f ( 2010 ) the option to leave : conditional dissociation in the evolution of cooperation .j theor biol 267:7684 .doi:10.1016/j.jtbi.2010.07.039 castro l , toro ma ( 2010 ) iterated prisoner s dilemma in an asocial world dominated by loners , not by defectors .theor popul biol 74:15 .doi:10.1016/j.tpb.2008.04.001 sasaki t , okada i , unemi t ( 2007 ) probabilistic participation in public goods games .proc r soc b 274:26392642 .doi:10.1098/rspb.2007.0673 xu zj , wang z , zhang lz ( 2010 ) bounded rationality in volunteering public goods games .j theor biol 264:1923 .doi:10.1016/j.jtbi.2010.01.025 garca j , traulsen a ( 2012 ) leaving the loners alone : evolution of cooperation in the presence of antisocial punishment .j theor biol 307:168173 .doi:10.1016/j.jtbi.2012.05.011 zhong lx , xu wj , shi yd , qiu t ( 2013 ) coupled dynamics of mobility and pattern formation in optional public goods games .chaos solitons fractals 47:1826 .doi:10.1016/j.chaos.2012.11.012 fowler j ( 2005 ) altruistic punishment and the origin of cooperation .proc natl acad sci usa 102:70477049 .doi:10.1073/pnas.0500938102 brandt h , hauert c , sigmund k ( 2006 ) punishing and abstaining for public goods .proc natl acad sci usa 103:495497 .doi:10.1073/pnas.0507229103 hauert c , traulsen a , brandt h , nowak ma , sigmund k ( 2007 ) via freedom to coercion : the emergence of costly punishment .science 316:19051907 .doi:10.1126/science.1141588 de silva h , hauert c , traulsen a , sigmund k ( 2009 ) freedom , enforcement , and 
the social dilemma of strong altruism .j evol econ 20:203217 .doi:10.1007/s00191 - 009 - 0162 - 8 sigmund k , de silva h , traulsen a , hauert c ( 2010 ) social learning promotes institutions for governing the commons .nature 466:861863 .doi:10.1038/nature09203 sigmund k , hauert c , traulsen a , de silva h ( 2011 ) social control and the social contract : the emergence of sanctioning systems for collective action .dyn games appl 1:149171 .doi:10.1007/s13235 - 010 - 0001 - 4 traulsen a , rhl t , milinski m ( 2012 ) an economic experiment reveals that humans prefer pool punishment to maintain the commons .proc r soc b 279:37163721 .doi:10.1098/rspb.2012.0937 yamagishi t ( 1986 ) the provision of a sanctioning system as a public good .j pers soc psychol 51:110116 .doi:10.1037/0022 - 3514.51.1.110 perc m ( 2012 ) sustainable institutionalized punishment requires elimination of second - order free - riders .sci rep 2:344 .doi:10.1038/srep00344 cressman r , song jw , zhang by , tao y ( 2011 ) cooperation and evolutionary dynamics in the public goods game with institutional incentives .j theor biol .doi:10.1016/j.jtbi.2011.07.030 baldassarri d , grossman g ( 2011 ) centralized sanctioning and legitimate authority promote cooperation in humans .proc nat acad sci usa 108:1102311026 .doi:10.1073/pnas.1105456108 isakov a , rand dg ( 2012 ) the evolution of coercive institutional punishment .dyn games appl 2:97109 .doi:10.1007/s13235 - 011 - 0020 - 9 andreoni j , gee lk ( 2012 ) gun for hire : delegated enforcement and peer punishment in public goods provision .j public econ 96:10361046 .doi:10.1016/j.jpubeco.2012.08.003 sasaki t , unemi t ( 2011 ) replicator dynamics in public goods games with reward funds .j theor biol 287:109114 .doi:10.1016/j.jtbi.2011.07.026 hofbauer j , sigmund k ( 1998 ) evolutionary games and population dynamics . cambridge university press , cambridge , uk sugden , r ( 1986 ) the economics of rights , co - operation and welfare .blackwell , oxford , uk kim y ( 1996 ) equilibrium selection in -person coordination games .games econ behav 15:203227 .doi:10.1006/game.1996.0066 hofbauer j ( 1999 ) the spatially dominant equilibrium of a game .ann oper res 89:233251 .doi:10.1023/a:1018979708014 goyal s , vega - redondo f ( 2005 ) network formation and social coordination .games econ behav 50:178207 .doi:10.1016/j.geb.2004.01.005 kandori m , mailath g , rob r ( 1993 ) learning , mutation , and long - run equilibria in games .econometrica 61:2956 .doi:10.2307/2951777 young ph ( 1993 ) the evolution of conventions .econometrica 61:5784 .doi:10.2307/2951778 ellison g ( 2000 ) basins of attraction , long - run stochastic stability , and the speed of step - by - step evolution .rev econ stud 67:1745 .doi:10.1111/1467 - 937x.00119 harsanyi jc , selten r ( 1988 ) a general theory of equilibrium selection in games . mit press , cambridge ,ma oechssler j ( 1997 ) decentralization and the coordination problem .j econ behav organ 32:119135 . 
doi:10.1016/s0167 - 2681(96)00022 - 4 colman am ( 2006 ) the puzzle of cooperation .nature 440:744745 .doi:10.1038/440744b herrmann b , thni c , gchter s ( 2008 ) antisocial punishment across societies .science 319:13621367 .doi:10.1126/science.1153808 rand dg , armao jj , nakamaru m , ohtsuki h ( 2010 ) anti - social punishment can prevent the co - evolution of punishment and cooperation .j theor biol 265:624632 .doi:10.1016/j.jtbi.2010.06.010 rand dg , nowak ma ( 2011 ) the evolution of antisocial punishment in optional public goods games .nature communications 2:434 . doi:10.1038/ncomms1442
|
rewards and penalties are common practical tools that can be used to promote cooperation in social institutions . the evolution of cooperation under reward and punishment incentives in joint enterprises has been formalized and investigated , mostly by using compulsory public good games . recently , sasaki _ et al . _ ( 2012 , proc natl acad sci usa 109:11651169 ) considered optional participation as well as institutional incentives and described how the interplay between these mechanisms affects the evolution of cooperation in public good games . here , we present a full classification of these evolutionary dynamics . specifically , whenever penalties are large enough to cause the bi - stability of both cooperation and defection in cases in which participation in the public good game is compulsory , these penalties will ultimately result in cooperation if participation in the public good game is optional . the global stability of coercion - based cooperation in this optional case contrasts strikingly with the bi - stability that is observed in the compulsory case . we also argue that optional participation is not so effective at improving cooperation under rewards .
|
different issues have been raised in the context of the net neutrality debate . for tim berners - lee , it means that `` if i pay to connect to the net with a certain quality of service , and you pay to connect with that or greater quality of service , then we can communicate at that level . '' for tim wu , the main idea is that `` a maximally useful public information network aspires to treat all content , sites , and platforms equally . ''according to , it `` usually means that broadband service providers charge consumers only once for internet access , do not favor one content provider over another , and do not charge content providers for sending information over broadband lines to end users . ''these definitions raise different questions , including connectivity , non - discrimination of application , based on type or origin , and network access pricing .net neutrality is a subject involving a range of issues regarding the regulation of public networks : ( a ) content neutrality , ( b ) blocking and rerouting , ( c ) denying ip - network interconnection , ( d ) network management , and ( e ) premium service fees .( b ) pertains to providers discriminating packets in favor of their own or affiliated content , while ( c ) is related to agreements between last - mile and backbone providers .( d ) has been a central argument for isps protesting the enforcement of net neutrality principles : they defend their right to manage their own networks , especially in order to deal with congestion issues ( due to high - volume peer - to - peer ( p2p ) traffic , see the `` comcast v. the fcc '' decision ) .they claim that regulations would act as a disincentive for capacity expansion of their networks . in this paper ,considering usage - based revenues , we address issues from topic ( a ) : side payments among providers and application neutrality .massive copyright infringements led copyright holders to seek remuneration from isps , while congestion due to p2p file sharing led some providers to adopt not application - neutral policies ( comcast throttling bittorrent traffic ) and to consider usage pricing ( as a congestion penalty , for overage of a monthly quota , or for premium service , ) . in what follows ,we study side payments ( from internet service ( access ) providers ( isps ) to content providers ( cps ) , or in the reverse direction ) and consider the impact of not application - neutral pricing independent of congestion .that is , we assume consumers are , to some extent , willing to pay usage - dependent fees .providers are then competing to settle on their usage - based prices , their goal being to maximize revenues coming from these charges .note that a null price in the following does not mean a provider has no income , but rather that all their monthly revenues come from flat - rate priced service components .study of the flat - rate regime is , however , out of the scope of this paper .see , for a comparison of both regimes for a simple model of congestion management .the rest of the paper is organized as follows .we discuss related work in subsection [ subsec : related ] and describe our problem framework in section [ sec : setup ] . 
in section [ sec : side ] , we study the impact of side payments on the competition between providers .we extend our framework in section [ sec : app ] to analyze the effect of not application - neutral pricing by the isps .we conclude in section [ sec : conclusion ] and discuss future work .previously , we considered certain net neutrality related issues like side payments and premium service fees ( e ) , limiting our consideration to monopolistic providers . in the following , we extend this model to include competition between multiple identical providers ( actually based on an idea sketched in section iv of ) .the validity of the isps argument that net neutrality is a disincentive for bandwidth expansion has been studied in . in the proposed framework , incentives for broadband providers to expand infrastructure capacity turned out to be higher under net neutrality , with isps tending to under- or over - invest in the non - neutral regime .ma advocate the use of shapley values as a fair way to share profits between providers .this approach yields pareto optimality for all players , and expects in particular cps , many of whom receive advertising revenues , to take part in network - capacity investments . however , this approach is coalitional and there are many obstacles to its real - life implementation . deals with the question of side payments and deploys a framework in which cps can subsidize consumers connectivity costs .the authors compare an unregulated regime with a `` net neutral '' one where restrictions apply on the maximum price isps can charge content providers .they find out that , even in the neutral case , cps can benefit from sharing revenue from end users if the latter are sufficiently price sensitive ( and the cost of connectivity is low enough ) .their framework is insightful , but does not take cp revenues into consideration . in ,the authors address whether local isps should be allowed to charge remote cps for the `` right '' to reach their end users ( again , this is the side payment issue ) . through study of a two - sided market, they determine when neutrality regulations are harmful depending on the parameters characterizing advertising rates and consumer price sensitivity .we study a similar issue in section [ sec : side ] , yet with a significantly different scenario . in ,the isps invest in network infrastructure , and then the cps invest depending on the quality of the resulting network .the difference in time and scale of these investments justifies a leader - follower dynamics .now in our model we suppose that the network is already deployed and that providers are setting up usage - based pricing to leverage ongoing revenues .for example , our scenario could be at&t beginning to charge google would like to do is use my pipes free , but i ai nt going to let them do that because we have spent this capital and we have to have a return on it . '' ] and its customers on a usage - based basis , while the leader - follower scenario would be closer to isps investing in optical fiber connections and high - quality video - on - demand providers coming to the new market .otherwise , both models share some assumptions , including a fixed number of players , homogeneous providers , and a uniform distribution of consumers among providers once the price war has ended .however , our cps revenues come from usage - based fees rather than advertising how to take both into account . 
] , and our content consumption model is different : users subscribe to one cp and get all their content from him. this could be the case , with online newspapers , music stores , video on demand , though this setting is more restrictive than a network where users are willing to access all cps , it is of practical use and fits more to the homogeneity assumption .the net neutrality debate is discussed in in light of historical precedents , especially dealing with the question of price discrimination .a conclusion about the way customers value the network is that connectivity is far more important than content .our model encompasses three types of players : the internauts ( end users ) , modeled collectively by their demand response , last - mile isps , and cps .consumers pay providers usage - dependent fees for service and content that requires one isp and one cp .providers then compete in a game to settle on their usage - based prices , which may turn out to be 0 ] ; * or : comes from ( [ u_from_a ] ) and ( [ abs - s ] ) ; * : means that . replacing by expression ( [ abs - s ] ) yields , which is impossible when . * : suppose and . from expression ( [ abs - s ] ) , we then have which is impossible .similarly , and would imply . according to ( [ abs - s ] ) , can not be greater than the maximum value of on ] yields two possible values for , which in turn give two set of prices corresponding to positive equilibrium demand and revenues .there are two solutions to ( [ side - equ1 ] ) and ( [ side - equ2 ] ) . for any solution to this system , given that for ( in a valid solution , players paying side payments can not get negative revenues ) . thus ,both the critical points we computed are also interior nash equilibrium points ( neps ) .generally , additional neps may exist on the boundary of the play - action space .demand and revenues at and are shown in figures [ fig : side - nep1 ] and [ fig : side - nep2 ] .note that * is consistent with the results of the non - discriminating setting : when , , and for . otherwise , has a rather unexpected impact on equilibrium revenues : more side payments yield _ decreased _ revenues for those who receive them .* does not exist when ( there is a discontinuity in equilibrium prices at this point ) .again , providers receiving side payments eventually get much less revenues than the others . yet , unlike , here their revenues increase with .both interior neps share the same `` paradox '' : providers receiving side payments eventually achieve _ less _ revenue than the others . herewe take ( the roles of isps and cps are swapped for ) . assume all providers act independently under a best - response behavior .thus , the vector field is an appropriate indicator of the aggregate `` trends '' of the system , see figure [ fig : side - quiver ] .so , if , the system is attracted by ; otherwise , unless is precisely equal to , the system is attracted to the boundary ( where usage - based revenues for isps come only from side payments ) , is an unstable ( saddle ) point . and .,scaledwidth=65.0% ] solving ( [ side - equ2 ] ) with yields where corresponding expressions for demand and revenues follow directly . 
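as a back - of - the - envelope companion to the best - response reasoning above , the sketch below iterates best responses in a deliberately stripped - down version of the game : a single isp and a single cp , a demand response that is linear in the sum of the two usage prices , and a side payment of delta per unit of demand paid by the cp to the isp . this is an illustration only and not the model analyzed here ( it has no stickiness term and no competition within a group ) ; the values of d0 , pmax and delta are invented .

```python
# toy model (illustrative only): demand D = D0 * (1 - (p_isp + p_cp)/pmax),
# the cp pays the isp a side payment of delta per unit of demand, and each
# provider repeatedly best-responds to the other's current usage price.

def best_response_isp(p_cp, pmax, delta):
    # maximizes (p_isp + delta) * (1 - (p_isp + p_cp)/pmax) over p_isp >= 0
    return max(0.0, (pmax - p_cp - delta) / 2.0)

def best_response_cp(p_isp, pmax, delta):
    # maximizes (p_cp - delta) * (1 - (p_isp + p_cp)/pmax) over p_cp >= 0
    return max(0.0, (pmax - p_isp + delta) / 2.0)

def settle(pmax=30.0, d0=100.0, delta=0.0, iters=200):
    p_isp = p_cp = pmax / 3.0                       # arbitrary starting prices
    for _ in range(iters):                          # best-response dynamics
        p_isp = best_response_isp(p_cp, pmax, delta)
        p_cp = best_response_cp(p_isp, pmax, delta)
    demand = max(0.0, d0 * (1.0 - (p_isp + p_cp) / pmax))
    u_isp = (p_isp + delta) * demand                # side-payment receiver
    u_cp = (p_cp - delta) * demand                  # side-payment payer
    return p_isp, p_cp, demand, u_isp, u_cp

for delta in (0.0, 5.0, 15.0):                      # threshold is pmax/3 = 10 here
    print(delta, settle(delta=delta))
```

even this toy version shows two of the qualitative features discussed above : the receiver 's equilibrium usage price is depressed by exactly the per - unit transfer , and once the transfer exceeds a threshold the receiver is pushed to a null usage price and lives off side payments alone , a crude analogue of the boundary equilibrium . it does not reproduce the revenue paradox , which needs the stickiness term and the competition of the full model .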
at the boundary , demand is higher than at or while isp revenues turn out to be lower ( and cp revenues higher ) than at ( see figure [ fig : side - bep ] ) ..,scaledwidth=60.0% ] when there are the same number of cps and isps , necessary conditions for a nep can be written as follows ( using the change of variables of section [ subsec:22 ] ) : this system is tractable for any .we computed its solutions for all and observed that the conclusions of the study above also hold for greater values of , * there are two interior neps : one ( ) is repulsive ( the aggregate behavior of one group of providers is to avoid it ) while the other one ( ) is attractive ; * if side payment receivers have a mean price , the system will converge to ; otherwise , it will go to a border equilibrium where they have a null usage - based price ; * equilibrium demand and revenue curves share the shape of those on figures [ fig : side - nep1 ] and [ fig : side - nep2 ] , scaled by a factor depending on .additional details can be found at .the number of providers impacts the maximum value of ( see theorem [ thm : side - threshold ] ) and scales equilibrium prices , demand and revenues .sample figures are given in table [ table : side - factors ] ..impact of on equilibria properties . [ cols="^,^,^,^,^,^",options="header " , ] for reasonable values of the number of competing providers , we saw that the introduction of side payments in the model yielded a game with two possible outcomes : * if initial prices of side payment receivers are high enough , providers will reach an equilibrium where receivers get less revenue than payers ( the higher the side payments , the lower the revenue ) . yet, this is the best compromise for them . *otherwise , receivers will be constrained into setting their usage - based fees to zero , depending only on side payments for their usage - based revenues .this is the worst solution for them . in both cases, the paradox of side payments is that they act as a _ handicap _ for those who receive them .now , let us consider to what extent isps should be allowed to perform price discrimination depending on the application in use ( video chat , media streaming ) ? in this section , we study the impact of such discrimination in a configuration with two crude example types of applications : web surfing and p2p file sharing .we extend our model to a setting with three types of providers : 1 .isps , providing last - mile access to the internauts , 2 .web content providers ( web cps ) , search engine portals ( recall all providers of any given type are deemed identical , so we assume all web cps provide the same type of client - server http content as well ) , and 3 .p2p content providers ( p2p cps ) , private p2p networks operated in cooperation with copyright holders .users choose an isp , a web cp and a p2p cp .to access web ( resp .p2p ) content , they pay usage - based fees to both their isp and their web cp ( resp .these groups are not coalitions : in a group , each provider acts independently to maximize their own revenue . in a neutral setting ,the isp charges a single price for all types of traffic , while otherwise it may set up two different prices and for http and p2p traffic respectively .denote by ( resp . ) the usage - based price charged by the web cp ( resp .we introduce two separate demand - response profiles for the two types of content : when isp , web cp and p2p cp are chosen , demands for http and p2p content are , respectively , with in the neutral setting . 
as previously ,define .the portion of users committed to the provider of the group is still modeled as ( [ eqn : sigma ] ) ; we will see in [ app - non - neutral ] how to generalize this to isps charging two different prices instead of one .revenues for isp , web cp and p2p cp are given by finally , we define the normalized sensitivity to usage - based pricing and the maximum prices ratio , and make the following assumptions : * : consumers are more sensitive to usage - based pricing for web content than for file sharing . * : customers are ready to pay more for content exchanged on p2p sharing systems ( movies , music , _ etc ._ ) than for web pages . now assume there is only one isp , one web cp and one p2p cp .this is the case when , in any group , either there is no competition or all providers have decided to form a coalition .closed solutions are easy to derive here . in the non - neutral setting ,the isp plays two independent `` isp vs. cp '' games and the equilibrium is thus given by for , with .see for further discussion of the `` isp vs. cp '' game . in the neutral setting , the isp has to find a compromise between the two applications , which is at the nep let us define and the harmonic mean of and .equilibrium prices set by the two other providers are neutrality yields lower demand for web content and higher demand for file sharing : equilibrium revenue for the isp is less than in the non - neutral setting .other players revenues are given by these expressions show that , while the p2p cp is better off in a neutral setting , both the isp and the web cp prefer the non - neutral configuration ( see figure [ fig : app - monop ] ) . and).,scaledwidth=60.0% ] also , an interesting fact to point out is that can not be too high : [ thm : app - monop - neutral ] in the neutral setting with monopolistic providers , there is an interior nep iff ( [ app - monop - bound ] ) is equivalent to .if it does not hold , the web cp will have no choice but to set his usage - based price to zero , thus opting out of the game .this condition is a consequence of the `` compromise '' sought by the isp : if , equation ( [ app - monop - p1 ] ) tells us will be greater than the maximum price consumers are ready to pay for web content . hereconsider the setting of non - monopolistic providers ( for ) where application neutrality is enforced . in particular , .utilities derivatives are then : u_{1i } , \\ { \frac{\partial{u_{ki}}}{\partial{p_{ki}}}}(\bp_1 , \bp_2 , \bp_3 ) & = & \left[\frac{1}{n_k \bp_k } - \frac{1}{\wtd_k}\right ] u_{ki } \textrm { for } k=2,3 , \end{aligned}\ ] ] where for .therefore , a nep must be solution to the linear system : whose resolution is straightforward . 
for any solution of this system , ensuring that is indeed a nep .when application non - neutral pricing is allowed , the isps utility is , where refers to the portion of users gathered by isp given his prices and .there are different ways to generalize equation ( [ eqn : sigma ] ) to multiple criteria : one could apply to the mean price or model as a convex combination of and .we chose where .that is , we apply the original stickiness model ( [ eqn : sigma ] ) to a combined price defined as a convex combination of and .this choice , particularly the geometric mean in ( [ app - non - neutral - sigma ] ) , is motivated by the following considerations : still satisfies the properties expected for a stickiness function ( see section [ sec : setup ] ) ; the weight of in the combination is increasing in and , and similarly the weight of is increasing in and ; and the resulting model is solvable in closed form . in this model ,utilities derivatives for cps are the same as in [ app - neutral ] ( regulations only affect the isps ) while u_{1i } , \\ { \frac{\partial{u_{1i}}}{\partial{p_{13,i } } } } & = & \left[\frac{(1-\alpha)\,(\wtd_3 - \bp_{13})}{\alpha \wtd_2 \bp_{12 } + ( 1-\alpha ) \wtd_3 \bp_{13 } } - \left(1-\frac{1}{n_1}\right ) \frac{1-\sqrt{\alpha\gamma}}{\wtp_{1i}}\right ] u_{1i}. \end{aligned}\ ] ] thus , any nash equilibrium satisfies : where .this system can be rewritten as two polynomial equations in and which are solvable in closed form .computations yield a single admissible solution here , for which and similarly for with instead of . as in [ app - neutral ] , and are negative as well , ensuring that is indeed a nash equilibrium of the game . in most of our numerical experiments, we compared revenues at this nep with those of the neutral scenario for and .we used sage for our computations , and all our scripts are available at .first , the trend we observed in subsection [ subsec : app - monopolistic ] ( isps and web cps prefer the non - neutral setting while p2p cps benefit from neutrality regulations ) also holds when the model encompasses competition ( see figure [ fig : app - competitive - revenues ] ) .is the number of providers of each type , and ) .plain ( resp .hatched ) columns correspond to neutral ( resp .non - neutral ) settings.,scaledwidth=75.0% ] the impact of non - neutral pricing on providers revenues varies with competition : increased competition brings less benefit for web cps and less loss for p2p cps . 
yet, competition has almost no effect on the gains of isps ( see figure [ fig : app - competitive - relvar ] ) .is the number of providers of each type , and .,scaledwidth=75.0% ] we also observed a maximum price gap , as we did in subsection [ subsec : app - monopolistic ] with theorem [ thm : app - monop - neutral ] , which decreases when competition increases .we presented an idealized framework to study the impact of two net - neutrality related issues , side payments and application neutrality , on the interactions among end users , isps and cps .our revenue model relied on a simple , common linear demand response to usage - based prices , and it accounted for customer loyalty .we studied the effect of regulated side payments between the isps and cps .the two possible outcomes of the competition both showed the same paradox : side payments are actually a handicap for those who receive them insofar as they reduce nash equilibrium revenues .we also studied the issue of application neutrality in a simple setting involving two types of content , web content and file sharing , the latter showing lower price sensitivity and higher willingness to pay under our assumptions of relative demand sensitivity to price .our analysis suggested that isps and web cps benefit from application non - neutral practices , while providers that enable content dissemination by p2p means are better off in a neutral setting .our current framework does not take into account additional advertising revenues that cps may receive , to lower their usage - based prices , or even deliver their content for free .there are multiple ways to remediate this . in , we chose to add a fixed parameter , so that .the underlying hypothesis is that is the outcome of a separate game played between advertisers and content providers .however , advertisers willingness to pay may also depend on consumers demand , and is likely to increase significantly when targeted advertising becomes possible . one way to remediate the shortcomings of fixed to add advertisers to the game , reacting to prices set by cps in a way similar to consumers , with linear demand - response and advertiser stickiness .their demand should be increasing with consumers demand , and decreasing with , ( cps revenues being ) .also , when there are sundry types of applications , their respective levels of targeted advertising could be taken into account with depending on application .though harder to solve , such a system would encompass another major component of the internet economy .e. altman , p. bernhard , s. caron , g. kesidis , j. rojas - mora and s. wong , `` a study of non - neutral networks with usage - based prices '' , in _ proc .workshop on economic traffic management ( etm ) _ , amsterdam ,sept . 6 , 2010 .see also http://arxiv.org/abs/1006.3894 r.t.b .chiu , j.c.s .lui , v. misra , and d. rubenstein , `` interconnecting eyeballs to content : a shapley value perspective on isp peering and settlement '' , _ proc .intl workshop on economics of networked systems _61 - 66 , 2008 .j. musacchio , g. schwartz and j. walrand , `` a two - sided market analysis of provider investment incentives with an application to the net - neutrality issue '' , _ review of network economics _ , 2009 , vol . 8 , issue 1 .
|
the ongoing debate over net neutrality covers a broad set of issues related to the regulation of public networks . in two ways , we extend an idealized usage - priced game - theoretic framework based on a common linear demand - response model . first , we study the impact of `` side payments '' among a plurality of internet service ( access ) providers and content providers . in the non - monopolistic case , our analysis reveals an interesting `` paradox '' of side payments in that overall revenues are reduced for those that receive them . second , assuming different application types ( http web traffic , peer - to - peer file sharing , media streaming , interactive voip ) , we extend this model to accommodate differential pricing among them in order to study the issue of application neutrality . revenues for neutral and non - neutral pricing are compared for the case of two application types .
|
uncovering rules governing collective human behavior is a difficult task because of the myriad of factors that influence an individual s decision to take action .investigations into the timing of individual activity , as a basis for understanding more complex collective behavior , have reported statistical evidence that human actions range from random to highly correlated . while most of the time the aggregated dynamics of our individual activities create seasonal trends or simple patterns , sometimes our collective action results in blockbusters , best - sellers , and other large - scale trends in financial and cultural markets . here , we attempt to understand this nontrivial herding by investigating how the distribution of waiting times describing individuals activity is modified by the combination of interactions and external influences in a social network .this is achieved by measuring the response function of a social system and distinguishing whether a burst of activity was the result of a cumulative effect of small endogenous factors or instead the response to a large exogenous perturbation .looking for endogenous and exogenous signatures in complex systems provides a useful framework for understanding many complex systems and has been successfully applied in several other contexts . as an illustration of this distinction in a social system ,consider the example of trends in queries on internet search engines in figure [ fig : google ] , which shows the remarkable differences in the dynamic response of a social network to major social events .for the `` exogenous '' catastrophic asian tsunami of december 26th , 2004 , we see the social network responded suddenly . in contrast , the search activity surrounding the release of a harry potter movie has the more `` endogenous '' signature generated by word - of - mouth , with significant precursory growth and an almost symmetric decay of interest after the release . in both `` endo '' and `` exo '' casesthere is a significant burst of activity .however , we expect to be able to distinguish the post - peak relaxation dynamics on account of the very different processes that resulted in the bursts .furthermore , we expect the relaxation process to depend on the interest of the population since this will influence the ease with which the activity can be spread from generation to generation . to translate this qualitative distinction into quantitative results , we describe a model of epidemic spreading on a social network and validate it with a data set that is naturally structured to facilitate the separation of this endo / exo dichotomy . 
our data consists of nearly 5 million time - series of human activity collected sub - daily over 8 months from the 4th most visited web site ( youtube ) .at the simplest level , viewing activity can occur one of three ways : randomly , exogenously ( when a video is featured ) , or endogenously ( when a video is shared ) .this provides us with a natural laboratory for distinguishing the effects that various impacts have and allows us to measure the social `` response function '' .various factors may lead to viewing a video , which include chance , triggering from email , linking from external websites , discussion on blogs , newspapers , and television , and from social influences .the epidemic model we apply to the dynamics of viewing behavior on youtube uses two ingredients whose interplay capture these effects .the first ingredient is a power law distribution of waiting times describing human activity that expresses the latent impact of these various factors using a response function which , on the basis of previous work , we take to be a long - memory process of the form by definition , the memory kernel describes the distribution of waiting times between `` cause '' and `` action '' for an individual .the `` cause '' can be any of the above mentioned factors .the action is for the individual to view the video in question after a time since she was first subjected to the `` cause '' _ without _ any other influences between and , corresponding to a direct ( or first - generation ) effect . in other words , is the `` bare '' memory kernel or propagator , describing the direct influence of a factor that triggers the individual to view the video in question . here , the exponent is the key parameter of the theory which will be determined empirically from the data .the second ingredient is an epidemic branching process that describes the cascade of influences on the social network .this process captures how previous attention from one individual can spread to others and become the cause that triggers their future attention . in a highly connected network of individualswhose interests make them susceptible to the given video content , a given factor may trigger action through a cascade of intermediate steps .such an epidemic process can be conveniently modeled by the so - called self - excited hawkes conditional poisson process .this gives the instantaneous rate of views as where is the number of potential viewers who will be influenced directly over all future times after by person who viewed a video at time .thus , the existence of well - connected individuals can be accounted for with large values of .lastly , is the exogenous source , which captures all spontaneous views that are not triggered by epidemic effects on the network . according to our model, the aggregated dynamics can be classified by a combination of the type of disturbance ( endo / exo ) and the ability of individuals to influence others to action ( critical / sub - critical ) , all of which is linked by a common value of .the following classification of behaviors emerges from the interplay of the bare long - memory kernel given by ( [ eq : memorybare ] ) and the epidemic influences across the network modeled by the hawkes process ( [ eq : intensity ] ) * * exogenous sub - critical*. 
when the network is not `` ripe '' ( that is , when connectivity and spreading propensity are relatively small ) , corresponding to the case when the mean value of is less than , then the activity generated by an exogenous event at time does not cascade beyond the first few generations , and the activity is proportional to the direct ( or `` bare '' ) memory function : * * exogenous critical*. if instead the network is `` ripe '' for a particular video , i.e. , is close to 1 , then the bare response is renormalized as the spreading is propagated through many generations of viewers influencing viewers influencing viewers , and the theory predicts the activity to be described by : * * endogenous critical*. if in addition to being `` ripe '' , the burst of activity is not the result of an exogenous event , but is instead fueled by endogenous ( word - of - mouth ) growth , the bare response is renormalized giving the following time - dependence for the view count before and after the peak of activity : * * endogenous sub - critical*. here the response is largely driven by fluctuations , and not bursts of activity .we expect that many time - series in this class will obey a simple stochastic process . dynamics described by the above classifications are illustrated in figure [ fig : predictions ] .in addition to these classes , the model predicts , by construction , a relationship between the fraction of views obtained on the peak day compared to the total cumulative views ( figure [ fig : predictions ] : inset ) . forthe * exogenous sub - critical * class , the absence of precursory growth and fast relaxation following a peak imply that close to 100% of the views are contained in the peak .for the * exogenous critical * class , the fractional views in the peak should be smaller than the previous case on account of the content penetrating the network resulting in a slower relaxation .finally , for the * endogenous critical * class , significant precursory growth followed by a slow decay imply that the fractional weight of this peak is very small compared to the total view count .this simple observation will be used as the basis for grouping exponents .we find that most videos dynamics ( % ) either do not experience much activity or can be statistically described as a random process ( using a poisson process and chi - squared test ) .this is not inconsistent with the endo - subcritical classification .for the other 10% ( videos ) we find nontrivial herding behavior which accurately obeys the three power - law relations described above .characteristic examples of endogenous and exogenous dynamics are shown in figure [ fig : endoexo ] .for these videos that experience bursts , we calculate the fraction of views on the most active day compared to the total count , and define three classes : 1 .class 1 is defined by .class 2 is defined by .class 3 is defined by .should our model have any informative power , we should find a correspondence based on this simple classification , we show in figure [ fig : histogram ] the histogram of exponents characterizing the power law relaxation .the exponents in the various classes cluster into groups with the most probable exponent in each class given respectively by , , and .these results are robust with respect to changes of the threshold percentages .these values are compatible with the predictions ( [ eq : bare ] ) , ( [ eq : exc ] ) , ( [ eq : enc ] ) of the epidemic model with a unique value of .having empirically extracted a value of , a further test of the 
model is provided by asking if the dynamics of videos with these exponents are consistent with the description of the model . here, the test of the epidemic model lies in the precursory dynamics .we check this by performing a peak - centered , aggregate sum for all videos with exponents near either 1.4 , 0.6 , or 0.2 , with the intent of visualizing the time evolution .each time - series is first normalized to 1 to avoid a single video from dominating the sum , and the final result is divided by the number of videos in each set so we can compare the three classes .the model predicts , and we indeed observe in figure [ fig : aggregate ] , that videos whose post - peak dynamics is governed by small exponents have significantly more precursory growth .one also sees very little precursory growth for the two exogenous classes .since by construction our grouping was based on the exponent characterizing the relaxation after the peak , one is not surprised to visualize faster decays for those videos with exponents near 1.4 compared with those of 0.6 and 0.2 .these results provide direct evidence that collective human dynamics can be robustly classified by epidemic models , and understood as the transformation of the distribution of individual waiting times by exogenous and endogenous forces .one of the surprising results is that the various classes are related by a common value of .while it is not expected that is universal in human systems , it is similar to what has been found in other studies .this provides a possible measure of the strength of interactions in a social network , and will be the subject of future work .in addition to these results , understanding collective human dynamics opens the possibility for a number of tantalizing applications .it is natural to suggest a qualitative labeling that is quantitatively consistent with the three classes derived from the model : viral , quality , and junk videos .viral videos are those with precursory word - of - mouth growth resulting from epidemic - like propagation through a social network and correspond to the endogenous critical class with an exponent ( ) .quality videos are similar to viral videos but experience a sudden burst of activity rather than a bottom - up growth . because of the `` quality '' of their content they subsequently trigger an epidemic cascade through the social network .these correspond to the exogenous critical class , relaxing with an exponent ( ) .lastly , junk videos are those that experience a burst of activity for some reason ( spam , chance , etc ) but do not spread through the social network . therefore their activity is determined largely by the first - generation of viewers , corresponding to the exogenous sub - critical class , and they should relax as ( ) . while one might argue that these labels are inherently subjective, they reflect the objective measure contained in the collective response to events and information .this is further supported by the average number of total views in each class , which is largest for `` viral '' ( 33,693 views ) and smallest for `` junk '' ( 16,524 views ) as one would expect .while the above description applies to videos , one could extend this technique to the realm of books , movies , and other commercial products , perhaps using sales as a proxy for measuring the relaxation of individual activity . 
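the two ingredients of the model , the power - law bare kernel and the hawkes cascade , are also easy to simulate directly , which gives a quick way to eyeball the three bursty regimes discussed above . the sketch below is a schematic of this class of models rather than the exact equations of the paper ( those are not preserved in this text ) : it assumes the usual forms , a bare kernel decaying as 1/t^(1 + theta) and an instantaneous view rate equal to the exogenous source plus the kernel contributions of all past views , and it draws a poisson number of views per day . parameter values are invented ; with theta = 0.4 the standard predictions for this model class ( relaxation exponents 1 + theta , 1 - theta and 1 - 2*theta for the three regimes ) give 1.4 , 0.6 and 0.2 , matching the empirical clusters quoted above .

```python
import numpy as np

rng = np.random.default_rng(0)

theta = 0.4                          # kernel exponent (assumed value)
T = 200                              # number of days simulated
lags = np.arange(1, T + 1)
phi = lags ** -(1.0 + theta)         # bare memory kernel, phi(t) ~ 1/t**(1+theta)
phi /= phi.sum()                     # normalize so n below acts as a branching ratio

def simulate(source, n):
    """source[t]: exogenous view rate on day t; n: mean number of further
    viewers directly influenced by one view (branching ratio)."""
    views = np.zeros(T)
    for day in range(T):
        rate = source[day]
        if day > 0:
            # epidemic part: every past view keeps triggering views through phi
            rate += n * np.sum(views[:day] * phi[day - 1::-1])
        views[day] = rng.poisson(rate)
    return views

exo = np.full(T, 0.1); exo[50] = 5000.0   # one large exogenous shock on day 50
endo = np.full(T, 2.0)                    # no shock: only word-of-mouth growth

exo_subcritical = simulate(exo, n=0.3)    # expected relaxation ~ bare kernel
exo_critical    = simulate(exo, n=0.95)   # slower, renormalized relaxation
endo_critical   = simulate(endo, n=0.95)  # precursory growth and slow decay around a peak
```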
the proposed method for classifying content has the important advantage of robustness as it does not rely on qualitative judgment , using information revealed by the dynamics of the human activity as the referee . more importantly , the method does not rely on the magnitude of the response because of the scale - free nature of the relaxation dynamics . this implies that identification of relevance or lack of relevance can be made for content that has mass - appeal , along with that which appeals to more specialized communities . furthermore , this framework could be used to provide a quantitative measure of the effectiveness of marketing campaigns by measuring the sales response to an `` exogenous '' marketing event . a tenet of complex systems theory is that many seemingly disparate and unrelated systems actually share an underlying universal behavior . in the digital age , we now have access to unprecedented stores of data on human activity . this data is usually almost trivial to acquire in both time and money when compared with traditional measurements . if the complex behavior in social systems is shared by other complex systems , then our approach , which disentangles the individual response from the collective , may provide a useful framework for the study of their dynamics .
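to make the proposed classification concrete , the snippet below is one way to prototype it : fit the post - peak relaxation exponent by ordinary least squares in log - log coordinates and assign the class from the fraction of total views collected on the peak day . this is a schematic reconstruction , not the authors ' pipeline ; in particular the two peak - fraction cut - offs are placeholders , since the percentages defining classes 1 - 3 are not preserved in the text above .

```python
import numpy as np

def relaxation_exponent(views, peak):
    """least-squares slope of log(views) versus log(t - t_peak) after the peak."""
    post = np.asarray(views[peak + 1:], dtype=float)
    lag = np.arange(1, len(post) + 1)
    keep = post > 0
    if keep.sum() < 2:
        return float("nan")
    slope, _ = np.polyfit(np.log(lag[keep]), np.log(post[keep]), 1)
    return -slope                      # exponent p in views(t) ~ 1/(t - t_peak)**p

def classify(views, junk_cut=0.8, quality_cut=0.2):
    """peak-fraction classes; the 0.8 / 0.2 cut-offs are illustrative only."""
    views = np.asarray(views, dtype=float)
    peak = int(np.argmax(views))
    frac = views[peak] / views.sum()
    if frac >= junk_cut:
        label = "class 1: exogenous sub-critical ('junk')"
    elif frac >= quality_cut:
        label = "class 2: exogenous critical ('quality')"
    else:
        label = "class 3: endogenous critical ('viral')"
    return label, frac, relaxation_exponent(views, peak)
```

the labels follow the mapping suggested above : the largest peak fractions go with the fastest ( junk - like ) relaxations and the smallest with the slowest ( viral - like ) ones .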
|
we study the relaxation response of a social system after endogenous and exogenous bursts of activity using the time - series of daily views for nearly 5 million videos on youtube . we find that most activity can be described accurately as a poisson process . however , we also find hundreds of thousands of examples in which a burst of activity is followed by a ubiquitous power - law relaxation governing the timing of views . we find that these relaxation exponents cluster into three distinct classes , and allow for the classification of collective human dynamics . this is consistent with an epidemic model on a social network containing two ingredients : a power law distribution of waiting times between cause and action and an epidemic cascade of actions becoming the cause of future actions . this model is a conceptual extension of the fluctuation - dissipation theorem to social systems , and provides a unique framework for the investigation of timing in complex systems .
|
we consider the transmission of a memoryless bivariate gaussian source over an average - power - constrained one - to - two gaussian broadcast channel .the transmitter observes the source and describes it to the two receivers by means of an average - power - constrained signal .each receiver observes the transmitted signal corrupted by a different additive white gaussian noise and wishes to estimate the source component intended for it .that is , receiver 1 wishes to estimate the first source component and receiver 2 wishes to estimate the second source component .our interest is in the pairs of expected squared - error distortions that are simultaneously achievable at the two receivers .we prove that an uncoded transmission scheme that sends a linear combination of the source components achieves the optimal power - versus - distortion trade - off whenever the signal - to - noise ratio is below a certain threshold .the threshold is a function of the source correlation and the distortion at the receiver with the weaker noise .this result is reminiscent of the results in about the optimality of uncoded transmission of a bivariate gaussian source over a gaussian multiple - access channel , without and with feedback .there too , uncoded transmission is optimal below a certain snr - threshold .this work is also related to the classical result of goblick , who showed that for the transmission of a memoryless gaussian source over the additive white gaussian noise channel , the minimal expected squared - error distortion is achieved by an uncoded transmission scheme .it is also related to the work of gastpar who showed for some combined source - channel coding analog of the quadratic gaussian ceo problem that the minimal expected squared - error distortion is achieved by an uncoded transmission scheme .our setup is illustrated in figure [ fig : bc - basic ] .[ cc][cc] [ cc][cc] [ cc][cc] [ cc][cc]source [ cc][cc] [ cc][cc] [ cc][cc] [ cc][cc] [ cc][cc] [ cc][cc] [ cc][cc] [ cc][cc] [ cc][cc] it consists of a memoryless bivariate gaussian source and a one - to - two gaussian broadcast channel .the memoryless source emits at each time a bivariate gaussian of zero mean and covariance matrix , i.e. , that and that will be justified in remark [ rmk : source ] , once the problem has been stated completely . ] the source is to be transmitted over a memoryless gaussian broadcast channel with time- input , which is subjected to an expected average power constraint for some given .the time- output at receiver is given by where is the time- additive noise term on the channel to receiver .for each the sequence is independent identically distributed ( iid ) and independent of the source sequence , where denotes the mean- variance- gaussian distribution and where we assume that is equivalent to the problem of sending a bivariate gaussian on a single - user gaussian channel . ] for the transmission we consider block encoding schemes where , for blocklength ,the transmitted sequence is given by for some encoding function , and where we use boldface characters to denote -tuples , e.g. .receiver s estimate of the source sequence intended for it , is a function of its observation , the quality of the estimate with respect to the original source sequence is measured in expected squared - error distortion averaged over the blocklength .we denote this distortion by , i.e. our interest is in the set of distortion pairs that can be achieved simultaneously at the two receivers as the blocklength tends to infinity . 
this notion of achievabilityis described more precisely in the following definition .[ df : achievability ] given , , and , we say that the tuple is _ achievable _ ( or in short , that the pair is achievable ) if there exist a sequence of encoding functions as in satisfying the average power constraint and sequences of reconstruction functions , as in with resulting average distortions , as in that fulfill whenever for an iid sequence of zero - mean bivariate gaussians with covariance matrix as in and iid zero - mean gaussians of variance , .based on definition [ df : achievability ] , we next define the set of all achievable distortion pairs . for any , , , , and as in definition [ df : achievability ] we define ( or just ) as the region of all pairs for which is achievable , i.e. l ( ^2,,p , n_1,n_2 ) = \ { ( d_1,d_2 ) : ( d_1,d_2,^2,,p , n_1,n_2 ) } .+ [ rmk : prpty - d ] the region is closed and convex .see appendix [ apx : prf - d - closed ] .[ rmk : source ] in the description of the source law in , we have excluded the case where .we have done so because for this case the optimality of uncoded transmission follows immediately for all snrs from the corresponding result for the single user scenario in .moreover , we have also assumed that the source components are of equal variance and that their correlation coefficient is nonnegative .we now show that these two assumptions incur no loss in generality .we can limit ourselves to nonnegative correlation coefficients because the distortion region depends on the correlation coefficient only via its absolute value .that is , the tuple is achievable if , and only if , the tuple is achievable . to see this , note that if achieves the distortion for the source of correlation coefficient , then , where achieves for the source with correlation coefficient .the restriction to source components of equal variances incurs no loss of generality because the distortion region scales linearly with the variance of the source components . to see this ,consider the more general case where the two source components are not necessarily of equal variances , i.e. , where and for some and for all .accordingly , define a tuple to be achievable , similarly as in definiton [ df : achievability ] .the proof now follows from showing that the tuple is achievable if , and only if , for every , the tuple is achievable .this can be seen as follows .if achieves the tuple , then where and where achieves the tuple .and by an analogous argument it follows that if is achievable , then also is achievable .+ we state one more property of the region . to this end, we need the following two definitions .[ df : dimin ] we say that is achievable if there exists some such that .the smallest achievable is denoted by .the achievability of and the distortion are analogously defined . by the classical single - user result ( * ? ? ?* theorem 9.6.3 , p. 473 ) [ df : di - star ] for every achievable , we define as the smallest such that is achievable , i.e. 
, similarly , in general , we have no closed - form expression for and .however , in the following two special cases we do : [ prp : d1star - of - d2min ] the distortion is given by the distortion pair is achieved by setting .see appendix [ apx : prf - d1star - of - d2min ] .[ prp : d2star - of - d1min ] the distortion is given by the distortion pair is achieved by setting .the value of follows from theorem [ thm : bc_main ] ahead as follows : for it can be verified that condition of theorem [ thm : bc_main ] is satisfied for all .hence , the pair is always achieved by the uncoded scheme with , , and so rcl d_2^(d_1,min ) & = & ^2 .( this remark will not be used in the proof of theorem [ thm : bc_main ] . )our main result states that , below a certain snr - threshold , every pair can be achieved by an uncoded scheme , where for every time - instant , the channel input is of the form for some . the estimate of ( at receiver ) , , is the minimum mean squared - error estimate of based on the scalar observation , i.e. , we denote the distortions resulting from this uncoded scheme by and .they are given by rcl d_1^u ( , ) & = & ^2 , + & & + d_2^u ( , ) & = & ^2 .+ & & [ eq : expr - d2u ] in the reminder , we shall limit ourselves to transmission schemes with ] , then the nonnegativity follows directly from and from the fact that . otherwise , if ] . herewe apply this with correspondig to , with corresponding to the rhs of , and with .this can be proved using ( * ? ? ?* corollary 24.2.1 and theorem 24.1 ) . ]it remains to prove that it suffices to limit ourselves to pairs that are achievable by coding schemes that achieve with equality . to this end, we first note that by definition [ df : di - star ] it suffices to prove lemma [ lm : prf - main - thm ] for pairs where satisfies and .the proof now follows by lemma [ lm : boundary - equality ] which states that for such pairs any sequence of schemes achieving must achieve with equality .lemma [ lm : prf - main - lm ] relates the two reconstruction fidelities and . the difficulty in doing so is that if we consider a scheme achieving some at receiver 2 , then we can only derive bounds on entropy expressions that are conditioned on .however , for a lower bound on we would typically like to have an upper bound on , or ( without conditioning on . ) to overcome this difficulty , we furnish receiver 1 with as side - information , and then prove lemma [ lm : prf - main - lm ] using lemma [ lm : bc_cond - mut - info ] and the following upper bound . which holds for every because the scaled sequence is a valid estimate of at receiver 1 .the desired bound now follows by evaluating the lhs of this inequality for the choice of denote by the least distortion that can be achieved on at receiver 1 when is provided as side - information .the proof follows from a lower bound on as a function of and from an upper bound on as a function of .we first derive the lower bound on . 
to this end , let denote the rate - distortion function on when is given as side - information to both , the encoder and the decoder .thus , for every , since receiver 1 is connected to the transmitter by a point - to - point link , the lower bound on now follows from upper bounding the rhs of by means of lemma [ lm : bc_cond - mut - info ] , and rewriting the lhs of using .this yields we next derive the upper bound on by considering the distortion of a linear estimator of when receiver 1 has as side - information .more precisely , we consider the linear estimator where , as we will see , the coefficients , correspond to those in lemma [ lm : prf - main - thm ] . to analyze the distortion associated with , first note that by the orthogonality condition of is satisfied .since is a valid estimate of at receiver 1 when is given as side - information , we thus obtain rcl[eq : ub - delta1 ] _1^(n ) & & _ k=1^n + & = & ^2 - 2a_1 ( _ k=1^n ) - 2 a_2 ^2 + a_1 ^ 2 ( _ k=1^n ) + & & + 2a_1a_2 ( _ k=1^n ) + a_2 ^ 2 ^2 + & & ^2 - 2a_1 ( ^2-_1^(n ) ) - 2 a_2 ^2 + a_1 ^ 2 ( ^2-_1^(n ) ) + & & + 2a_1a_2 ( _ k=1^n ) + a_2 ^ 2 ^2 , + & & ^2 - a_1 ( ^2-_1^(n))(2-a_1 ) - a_2 ^2 ( 2-a_2 ) + & & + 2a_1a_2 . where in step we have used that the normalized summations over ] are both equal to , which follows by ; and in step we have used lemma [ lm : bc_ub ] and the assumption that .the lower bound on of lemma [ lm : prf - main - lm ] now follows easily : since the rhs of is monotonically decreasing in , combining with gives where we have denoted by the rhs of .we show that for any nonnegative , the achievable distortion is lower bounded by by reduction [ redc : d1-equality ] it suffices to show this for coding schemes , , with and given in and with associated normalized distortions , satisfying where satisfies . by there exists a subsequence , tending to infinity , such that hence , where follows from ; follows from lemma [ lm : prf - main - lm ] ; and follows from and from the continuity of with respect to a continuity which can be argued from as follows .the function depends on only through , and is strictly positive for all and all , and it is continuous in because , by , is continuous in .hence , is continuous in .a. lapidoth and s. tinguely , `` sending a bivariate gaussian source over a gaussian mac with feedback , '' submitted to _ ieee transactions on information theory_. available on ` http://arxiv.org/pdf/0903.3487 ` .m. gastpar , `` uncoded transmission is exactly optimal for a simple gaussian sensor network '' , in _ proceedings information theory and applications workshop _ , san diego , ca , usa , january 29 - february 2 , 2007 .
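as a small numerical companion to the uncoded scheme analyzed above : since the channel input is a scaled linear combination of the two source components and each receiver forms the minimum mean squared - error estimate of its component from a single scalar observation , the resulting distortions follow from the scalar gaussian mmse formula d = var(s) - cov(s , y)^2 / var(y) . the sketch below evaluates them for one possible normalization of the input , x = c * ( alpha*s1 + beta*s2 ) with c chosen so that e[x^2] = p ; this normalization is an assumption made for illustration , as the exact parametrization used above is not recoverable from this text .

```python
import numpy as np

def uncoded_distortions(alpha, beta, sigma2, rho, P, N1, N2):
    """distortions of an uncoded linear scheme with mmse estimation at each receiver.

    assumed input (illustrative normalization): x = c*(alpha*S1 + beta*S2),
    with c chosen so that E[x^2] = P; S1, S2 are jointly gaussian with common
    variance sigma2 and correlation coefficient rho; Y_i = x + Z_i, Var(Z_i) = N_i.
    """
    s = sigma2 * (alpha ** 2 + 2 * alpha * beta * rho + beta ** 2)  # E[(a*S1+b*S2)^2]
    c = np.sqrt(P / s)
    dists = []
    for w_self, w_other, noise in ((alpha, beta, N1), (beta, alpha, N2)):
        cov = c * sigma2 * (w_self + w_other * rho)    # Cov(S_i, Y_i)
        dists.append(sigma2 - cov ** 2 / (P + noise))  # scalar gaussian mmse
    return tuple(dists)

# illustrative numbers only (receiver 1 assumed to have the weaker noise)
print(uncoded_distortions(alpha=0.6, beta=0.4, sigma2=1.0, rho=0.5, P=2.0, N1=0.2, N2=1.0))
```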
|
we consider the transmission of a memoryless bivariate gaussian source over an average - power - constrained one - to - two gaussian broadcast channel . the transmitter observes the source and describes it to the two receivers by means of an average - power - constrained signal . each receiver observes the transmitted signal corrupted by a different additive white gaussian noise and wishes to estimate the source component intended for it . that is , receiver 1 wishes to estimate the first source component and receiver 2 wishes to estimate the second source component . our interest is in the pairs of expected squared - error distortions that are simultaneously achievable at the two receivers . we prove that an uncoded transmission scheme that sends a linear combination of the source components achieves the optimal power - versus - distortion trade - off whenever the signal - to - noise ratio is below a certain threshold . the threshold is a function of the source correlation and the distortion at the receiver with the weaker noise .
|
due to the increasing number of cpu cores , exploiting possible speedups by parallel computations is nowadays more important than ever .parallel evolutionary algorithms ( eas ) form a popular class of heuristics with many applications to computationally expensive problems .this includes _ island models _ , also called _ distributed eas _ , _ multi - deme eas _ or _ coarse - grained eas_. evolution is parallelized by evolving subpopulations , called _ islands _ , on different processors .individuals are periodically exchanged in a process called _ migration _ , where selected individuals , or copies of these , are sent to other islands , according to a migration topology that determines which islands are neighboring .also more fine - grained models are known , where neighboring subpopulations communicate in every generation , first and foremost in _ cellular eas _ . by restricting the flow of information through spatial structures and/or infrequent communication , diversity in the whole system is increased .researchers and practitioners frequently report that parallel eas speed up the computation time , and at the same time lead to a better solution quality . despite these successes , a long history and very active research in this area ,the theoretical foundation of parallel eas is still in its infancy .the impact of even the most basic parameters on performance is not well understood .past and present research is mostly empirical , and a solid theoretical foundation is missing .theoretical studies are mostly limited to artificial settings . in the study of _ takeover times_ , one asks how long it takes for a single optimum to spread throughout the whole parallel ea , if the ea uses only selection and migration , but neither mutation nor crossover .this gives a useful indicator for the speed at which communication is spread , but it does not give any formal results about the running time of evolutionary algorithms with mutation and/or crossover .one way of gaining insight into the capabilities and limitations of parallel eas is by means of rigorous running time analysis . by asymptotic bounds on the running timewe can compare different implementations of parallel eas and assess the speedup gained by parallelization in a rigorous manner . in the authors presented the first running time analysis of a parallel evolutionary algorithm with a non - trivial migration topology .it was demonstrated for a constructed problem that migration is essential in the following way .a suitably parametrized island model with migration has a polynomial running time while the same model without migration as well as comparable panmictic populations need exponential time , with overwhelming probability .neumann , oliveto , rudolph , and sudholt presented a similar result for island models using crossover . if islands perform crossover with immigrants during migration , this can drastically speed up optimization .this was demonstrated for a pseudo - boolean example as well as for instances of the vertexcover problem . 
in this workwe take a broader view and consider the speedup gained by parallelization for various common pseudo - boolean functions and function classes of varying difficulty .a general method is presented for proving upper bounds on the parallel running time of parallel eas .the latter is defined as the number of generations of the parallel ea until a global optimum is found for the first time .this allows us to estimate the speedup gained by parallelization , defined as the ratio of the expected parallel running time of an island model and the expected running time for a single island .it also can be used to determine how to choose the number of islands such that the parallel running time is reduced as much as possible , while still maintaining an asymptotically optimal speedup .our method is based on the _ fitness - level method _ or _ method of -based partitions _ , a simple and well - known tool for the analysis of evolutionary algorithms . the main idea of this method is to divide the search space into sets , strictly ordered according to fitness values of elements therein .elitists eas , i.e. , eas where the best fitness value in the population can never decrease , can only increase their current best fitness .if , for each set we know a lower bound on the probability that an elitist ea finds an improvement , i.e. , for finding a new search point in a new best fitness - level set , this gives rise to an upper bound on the expected running time .the method is described in more detail in section [ sec : preliminaries ] . in section [ sec : general - upper - bounds ] we first derive a general upper bound for parallel eas , based on fitness levels .our general method is then tailored towards different spatial structures often used in fine - grained or cellular evolutionary algorithms and parallel architectures in general : ring graphs ( theorem [ the : method - ring ] in section [ sec : ring ] ) , torus graphs ( theorem [ the : method - torus ] in section [ sec : torus ] ) , hypercubes ( theorem [ the : method - hypercube ] in section [ sec : hypercube ] ) and complete graphs ( theorems [ the : method - completegraph ] and [ the : method - completegraph - refined ] in section [ sec : perfectparallelization ] ) .the only assumption made is that islands run elitist algorithms , and that in each generation each island has a chance of transmitting individuals from its best current fitness level to each neighboring island , independently with probability at least .we call the latter the _ transmission probability_. it can be used to model various stochastic effects such as disruptive variation operators , the impact of selection operators , probabilistic migration , probabilistic emigration and immigration policies , and transient faults in the network .this renders our method widely applicable to a broad range of settings .our estimates of parallel running times from theorems [ the : method - ring ] , [ the : method - torus ] , [ the : method - hypercube ] , [ the : method - completegraph ] , and [ the : method - completegraph - refined ] are summarized in the following theorem , hence characterizing our main results . throughout this work always denotes the number of islands . [the : generalbounds ] consider an island model with islands where each island runs an elitist ea . 
for each island let there be a fitness - based partition such that for all , all points in have a strictly worse fitness than all points in , and contains all global optima . we say that an island is in if the best search point on the island is in . let be a lower bound for the probability that in one generation a fixed island in finds a search point in . further assume that for each edge in the migration topology in every iteration there is a probability of at least that the following holds , independently from other edges and for all : if the source island is in then after the generation the target island is in . then the expected parallel running time of the island model is bounded by
1 . for every ring graph or any other strongly connected topology ( strongly connected : there is a directed path from to and vice versa ) ,
2 . for every undirected grid or torus graph with side lengths at least ,
3 . for the -dimensional hypercube graph ,
4 . for the complete topology , as well as .
a remarkable feature of our method is that it can automatically transfer upper bounds for panmictic eas to parallel versions thereof . the only requirement is that bounds on panmictic eas have been derived using the fitness - level method , and that the partition and the probabilities for improvements used therein are known . then the expected parallel time of the corresponding island model can be estimated for all mentioned topologies simply by plugging the into theorem [ the : generalbounds ] . fortunately , many published runtime analyses use the fitness - level method either explicitly or implicitly and the mentioned details are often stated or easy to derive . hence even researchers with limited expertise in runtime analysis can easily reuse previous analyses to study parallel eas . further note that we can easily determine which choice of , the number of islands , will give an upper bound of order , the best upper bound we can hope for using the fitness - level method . in all bounds from theorem [ the : generalbounds ] we have a first term that varies with the topology and , and a second term that is always . the first term reflects how quickly information about good fitness levels is spread throughout the island model . choosing such that the second term becomes asymptotically as large as the first one , or larger , we get an upper bound of . for settings where is an asymptotically tight upper bound for a single island , this corresponds to an asymptotic linear speedup . the maximum feasible value for depends on the problem , the topology and the transmission probability .
[ table : asymptotic bounds on expected parallel ( , number of generations ) and sequential ( , number of function evaluations ) running times and expected communication efforts ( , total number of migrated individuals ) for various -bit functions and island models with islands running the and using migration probability . the number of islands was always chosen to give the best possible upper bound on the parallel running time , while not increasing the upper bound on the sequential running time by more than a constant factor . for unimodal functions denotes the number of function values . see for bounds for the . results for were restricted to for simplicity . all upper bounds for and stated here are asymptotically tight , as follows from general results in . ]
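the quantities appearing in these bounds ( the parallel running time in generations , the sequential effort , and the speedup over a single island ) can be made concrete with a small simulation . the sketch below runs an island model of elitist ( 1+1 ) eas on the number - of - ones function with a directed ring migration topology and a transmission probability ; it is only an illustration of the setting of theorem [ the : generalbounds ] , not an implementation of its bounds , and the migration policy ( an island adopts an immigrant only if it is strictly better ) , the choice of test function and all parameter values are assumptions made for this example .

```python
import random

def one_max(x):
    # fitness = number of ones in the bit string
    return sum(x)

def mutate(x, n):
    # standard bit mutation: flip each bit independently with probability 1/n
    return [b ^ (random.random() < 1.0 / n) for b in x]

def island_model_ring(n=100, num_islands=8, p_migrate=1.0, max_gens=100000):
    """simulate num_islands elitist (1+1) EAs, directed ring migration.
    returns the parallel running time: the number of generations until
    some island holds the optimum (the all-ones string)."""
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(num_islands)]
    fit = [one_max(x) for x in pop]
    for gen in range(1, max_gens + 1):
        # variation: each island performs one generation of the (1+1) EA
        for i in range(num_islands):
            y = mutate(pop[i], n)
            fy = one_max(y)
            if fy >= fit[i]:            # elitist acceptance
                pop[i], fit[i] = y, fy
        # migration along the directed ring, each edge fires independently
        new_pop = [list(x) for x in pop]
        new_fit = list(fit)
        for i in range(num_islands):
            j = (i + 1) % num_islands   # neighbour on the ring
            if random.random() < p_migrate and fit[i] > new_fit[j]:
                new_pop[j], new_fit[j] = list(pop[i]), fit[i]
        pop, fit = new_pop, new_fit
        if max(fit) == n:
            return gen
    return max_gens

if __name__ == "__main__":
    random.seed(0)
    t_single = island_model_ring(num_islands=1)
    t_ring = island_model_ring(num_islands=8)
    print("single island: %d generations" % t_single)
    print("8 islands (ring): %d generations, sequential effort %d"
          % (t_ring, 8 * t_ring))
```

comparing the measured parallel time of the island model with the single - island time gives an empirical speedup that can be set against the asymptotic statements above .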
|
we present a new method for analyzing the running time of parallel evolutionary algorithms with spatially structured populations . based on the fitness - level method , it yields upper bounds on the expected parallel running time . this allows to rigorously estimate the speedup gained by parallelization . tailored results are given for common migration topologies : ring graphs , torus graphs , hypercubes , and the complete graph . example applications for pseudo - boolean optimization show that our method is easy to apply and that it gives powerful results . in our examples the possible speedup increases with the density of the topology . surprisingly , even sparse topologies like ring graphs lead to a significant speedup for many functions while not increasing the total number of function evaluations by more than a constant factor . we also identify which number of processors yield asymptotically optimal speedups , thus giving hints on how to parametrize parallel evolutionary algorithms . parallel evolutionary algorithms , runtime analysis , island model , spatial structures
|
often , as a result of experimental limitations , only one dimensional data is available for chaotic physical systems which have higher dimensionality .the dripping faucet is one such chaotic system .another such system is the rayleigh - benard convective system , which was experimentally realised by castaing et al . + the technique of state space reconstruction is used widely in analysis of time series data .it finds applications , for example ; in analysis of the time series obtained from multi - filamentation in optical beams , fiber solitons and ocean rogue waves .it was concluded that predictability of rogue wave phenomenon in oceans is feasible in a interval of , where tau is the delay time determined using linear auto correlation .it is therefore , important to determine the delay time accurately .it also finds applications in analysis of chaotic data from rainfall and other climatic systems .time delay techniques are often used in analysis of financial time series and stock trends . therefore , accurately understanding phase space dynamics is of paramount importance in characterizing , predicting and eventually controlling chaos .yet another system of particular interest is the system in , which yields a fifteen dimensional attractor having an intrinsic time delay of governed by delay differential equations used to predict the system time series upto several delay periods .handling chaotic experimental data has always posed a challenge .grassberger and proccacia defined the correlation dimension , as a scalable alternative to capacity and information dimension , for finite data sets . + chaotic systems such as the lorenz system and the mackey - glass system yield solutions that lie on well characterised and multi - dimensional strange attractors .the shape of these attractors and their dimensionality has also been well characterised . among the differences between the lorenz system and the mackey - glass systemis that the latter has a well defined time - delay parameter in its governing nonlinear delay differential equation whereas the former is governed by a set of three nonlinear coupled ordinary differential equations without any explicit time delay .in this letter , we present a method to reconstruct the multi - dimensional state space and strange attractor of a chaotic system using a one - dimensional time series arrived at from solution of the governing differential equations without a priori knowledge of any implicit time delay .we address the problem of accurately determining time - delay and embedding dimension for state space reconstruction of high dimensional chaotic systems using one - dimensional system data .+ the first step in this direction is the whitney embedding theorem which states that a map from an n - manifold to a dimensional euclidean space is an embedding .subsequently , takens showed that an n - manifold can be recovered from a single measured quantity .it was shown that time delayed versions of the measured quantity [ s(t),s() ... 
s( ) ] would embed the n - manifold . however , data from physical systems do not indicate a natural choice for the delay coordinate and embedding dimension . figure [ fig:1 ] illustrates that choice of affects the reconstruction significantly . we are hence motivated to give a prescription to choose the delay time efficiently and accurately . the linear auto correlation function has popularly been used to choose the delay time . further , fraser and swinney have suggested the use of average mutual information to choose the delay coordinate . however , in our present case , we found that neither choice yielded an appealing reconstruction . we hence are motivated to suggest a prescription of our own . we briefly discuss the choice of embedding dimension .
+ a key idea that we use in this letter is that of fractal dimension . we hence make some elementary definitions of importance . we first denote an open ball of radius centred at the point , by . we then let denote the natural measure associated with set s. the pointwise fractal dimension may be defined as ; the remarkable feature of the pointwise dimension is that it is independent of the point that is chosen . a heuristic argument for this may be found in ott . a more comprehensive review of fractal dimensions may be found in farmer et al s work .
+ we test our methodology on the well characterized lorenz and mackey - glass systems . the lorenz system is given by . the mackey - glass system is given by ; where represents the value of at time . the values of , , and were chosen to be 2 , 1 , 10 and 1500 respectively .
[ figure fig:1 : fig:1a shows the original lorenz attractor ; [ fig:1b]-[fig:1h ] show the reconstructions of the lorenz attractor from a time series for successively larger values of delay coordinate . the time series was generated by using the x coordinates of successive points . the numerical solution of the lorenz equation was obtained using the euler method from the parameters , and , with initial condition . the high degree of similarity between fig:1a and fig:1e vindicates the method used for reconstruction . ]
+ we now propose a new prescription for the choice of delay coordinate in a reconstruction . however , we would first require to make a guess for the embedding dimension . but , in principle , once is determined by the prescriptions suggested later , the process may be repeated . we let s(i ) denote the entry of the time series .
+ then we write the euclidean distances as : the measure is ; where is the unit step function , n the total number of state space points obtained , and , an arbitrarily small number . it is then easy to see that the following equation for would correspond to the expression for the pointwise dimension taken centred about point . we next define the dimension deviation function , f as , where is the value of the pointwise dimension averaged over all points . we now claim that the minima of is a good choice for .
+ we motivate this claim with the following argument . we first notice that is the measure associated with an open ball of radius centred around a point in phase space , labelled , reconstructed with delay time .
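as an illustration of the quantities just introduced , the sketch below builds the delay reconstruction of a scalar series , counts the neighbours inside a ball of radius around a set of centres , fits the pointwise dimension by regression , and returns its standard deviation as the dimension deviation . the lorenz parameters and euler step , the normalisation of the data , the list of radii and the number of centres are assumptions of this example rather than the exact choices made in the letter .

```python
import numpy as np

def lorenz_x(n_steps=20000, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """x-component of a lorenz trajectory integrated with the euler method
    (the parameter values are the usual textbook choice, assumed here)."""
    x, y, z = 1.0, 1.0, 1.0
    out = []
    for _ in range(n_steps):
        x, y, z = (x + dt * sigma * (y - x),
                   y + dt * (x * (rho - z) - y),
                   z + dt * (x * y - beta * z))
        out.append(x)
    return out

def delay_embed(s, dim, tau):
    """build delay vectors [s(i), s(i+tau), ..., s(i+(dim-1)*tau)]."""
    m = len(s) - (dim - 1) * tau
    return np.array([s[i:i + (dim - 1) * tau + 1:tau] for i in range(m)])

def pointwise_dims(vectors, eps_list, centers):
    """estimate the pointwise dimension at each chosen centre by a
    least-squares fit of log(counts within eps) against log(eps)."""
    dims = []
    log_eps = np.log(eps_list)
    for i in centers:
        d = np.linalg.norm(vectors - vectors[i], axis=1)
        counts = np.array([(d < eps).sum() - 1 for eps in eps_list])  # exclude the centre
        if (counts < 2).any():      # discard centres with too few neighbours (outliers)
            continue
        slope = np.polyfit(log_eps, np.log(counts / len(vectors)), 1)[0]
        dims.append(slope)
    return np.array(dims)

def dimension_deviation(s, tau, dim=3, eps_list=(0.02, 0.05, 0.1), n_centers=200):
    """f(tau): standard deviation of the pointwise dimension over a
    representative set of centres, for a reconstruction with delay tau."""
    v = delay_embed(np.asarray(s, dtype=float), dim, tau)
    v = (v - v.min()) / (v.max() - v.min())    # normalise so the radii are comparable
    rng = np.random.default_rng(0)
    centers = rng.choice(len(v), size=min(n_centers, len(v)), replace=False)
    return pointwise_dims(v, np.array(eps_list), centers).std()

if __name__ == "__main__":
    s = lorenz_x()
    for tau in (5, 10, 20, 40, 80):
        print(tau, dimension_deviation(s, tau))
```

scanning the delay and keeping the first local minimum of the returned value is the prescription argued for below .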
from equation [ dp ] , it follows that is the pointwise dimension of the attractor . hence is the standard deviation in the pointwise dimension . should the reconstruction be one that recovers most of the attracting set dynamics , we would expect to obtain zero standard deviation ( since the pointwise dimension is invariant with respect to the point chosen , for an attracting set ) . hence a minima in the standard deviation would definitely occur if the attractor is fully recovered , since the standard deviation is necessarily a positive quantity .
+ we may further argue that only if there exists a set of points where the measure remains invariant and positive and another disjoint set with another value of the measure would we see non zero or relatively larger values of standard deviation in the pointwise dimension . however the measure on the set would not be ergodic , and hence can not correspond to an attracting set of a smooth map . an ergodic measure can not be decomposed into two measures , and , such that ; where p is any real which lies in the interval ( 0,1 ) .
+ it may however prove tricky to actually compute the pointwise dimension from a finite quantity of data , since no finite amount of data can give an accurate estimate of measure . it is therefore recommended to use very small values of in equation [ dp ] , but allow only those values that contain at least two points within the open ball , to avoid outliers . a regression fit of against ought to give a reliable estimate of the pointwise dimension . further , computations can be cut down by choosing to use a large representative set of points , as the centers for the computation of the pointwise dimension , rather than the entire data set .
+ in the case of the lorenz attractor , we observe that the first local minima of the dimension deviation function obtained at gives the best reconstruction observed visually . fig . [ fig:2a ] shows the dimension deviation as a function of .
[ figure fig:2a : dimension deviation as a function of for the lorenz attractor ; = 1805 , approximately the same as the best reconstruction . time is in normalised units . ]
[ figure fig:2b : dimension deviation as a function of for the mackey - glass attractor ; = 1600 . the delay coordinate chosen in the underlying delay differential equation is . time is in normalised units . ]
figure [ fig:2b ] shows the dimension deviation function for the mackey - glass attractor . the first minima was obtained at units , while the delay used in the underlying attractor was units . this high degree of accuracy indicates that the first local minima of the dimension deviation is indeed a good choice for the delay coordinate . some other methods for determining the optimal delay have been proposed in literature . the first method uses the linear auto correlation function . the linear auto correlation is defined by the following relationship : \frac{\frac{1}{n}\sum_{m}[s(m+\tau)-\bar{s}][s(m)-\bar{s}]}{\frac{1}{n}\sum_{m}[s(m)-\bar{s}]^2} , where \bar{s } is the average value of the time series . we immediately remark that this definition follows from finding the best fit function for the linear relationship . the prescription often used for the choice of is the first zero of the auto - correlation function defined above . it is easy to see that , the linear auto - correlation function , may yield a bad choice for for nonlinear systems , since minimising the linear dependence of terms separated by a time - span of , does not necessarily minimise the overall dependence that arises from the non linear terms . further , minimising the dependence of terms separated by may not be the best strategy , since we are looking for an intermediate such that terms separated by a distance of
are neither statistically independent , nor nearly overlapping .
+ in the study of the lorenz attractor , we found that the auto correlation had its first zero far from the point where the best visual reconstruction was found . this demonstrates the failure of this prescription for nonlinear systems . yet another prescription , used often , is the mutual information function . it is a generalization of the linear auto - correlation function , and relates the information content in one set to the information content in another . it was first proposed by gallagher . in the context of time series analysis , we measure the mutual information content of terms separated by a distance . we define mutual information between terms separated by a distance to be -\sum_{n } p[n , n+\tau ] \log\left(\frac{p[n ] \, p[n+\tau]}{p[n , n+\tau ] } \right ) , where p[n , n+\tau ] represents the joint probability of their occurrence . the prescription often suggested is the use of the first minima of the average mutual information as an appropriate choice of . however , since also measures the information between two terms separated by , we expect that its first minima is close to the zero of the linear auto - correlation function . here p[n ] is determined by the relative frequency of the occurrence of the value , and p[n+\tau ] is determined likewise . p[n , n+\tau ] is determined by the relative frequency of occurrence of the pair of numbers and , separated by exactly a time - span of .
+ in the present study of the lorenz attractor , the plot of the average mutual information against yielded a minima that was far from the delay time used in the best visual reconstruction . however , it was closer to the optimal value of delay time , as compared to that predicted by the linear auto - correlation . hence , this method too yields a value that is far off in the present case . while there exist many methods that one may use to determine the optimal value of the embedding dimension , we suggest one that is along the lines of the method of false nearest neighbours listed in abarbanel et al s work . we rewrite equation [ dp1 ] , however now making it a function of , keeping fixed . here indicates the total number of nearest neighbours . we now observe that at a low dimension the number of false neighbours at every point would be higher . however , when embedded in any dimensionality higher than the optimal embedding dimension , the number of neighbours would remain nearly the same . hence looking for the point of saturation of the total number of neighbours , against the embedding dimension , would give us a good estimate of the optimal embedding dimension . in the present case , for the lorenz attractor , using the plot of embedding dimension against the number of neighbours it was found to saturate at an embedding dimension value of 5 . to summarise , we have tested the methods for delay coordinate choice given in literature and found that they have not succeeded in our case study of the lorenz and mackey - glass systems . we further developed an alternative prescription for the choice of delay coordinate , modelled after the deviation in the pointwise dimension . it worked significantly better for our particular case .
+ we then used the same definition to write down a prescription for the choice in embedding dimension as well . the major shortcoming of the proposed methodology lies in an arbitrary initial choice in embedding dimension that has to be made to accurately determine the delay time .
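for completeness , a compact sketch of the two standard delay prescriptions discussed above ( the first zero of the linear auto - correlation and the first minimum of a histogram - based average mutual information ) is given here ; the number of histogram bins and the maximum lag are arbitrary choices of this illustration .

```python
import numpy as np

def autocorr_first_zero(s, max_lag=2000):
    """first zero crossing of the linear auto-correlation function."""
    s = np.asarray(s, dtype=float)
    s = s - s.mean()
    var = np.mean(s * s)
    for tau in range(1, max_lag):
        c = np.mean(s[:-tau] * s[tau:]) / var
        if c <= 0.0:
            return tau
    return None

def mutual_information(s, tau, bins=32):
    """average mutual information between s(n) and s(n+tau),
    estimated from a joint histogram (a simple plug-in estimator)."""
    x, y = np.asarray(s[:-tau]), np.asarray(s[tau:])
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def mi_first_minimum(s, max_lag=2000):
    """first local minimum of the average mutual information."""
    prev = mutual_information(s, 1)
    for tau in range(2, max_lag):
        cur = mutual_information(s, tau)
        if cur > prev:      # the value started rising again
            return tau - 1
        prev = cur
    return None
```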
to circumvent this shortcoming , an arbitrary and high choice of the embedding dimension can be made to determine the optimal delay , then determine the optimal embedding dimension and further redo the calculation for in the new embedding dimension .
+ our method has a time complexity of , while mutual information has an algorithm with time complexity . however , one may average over a representative set of points rather than the whole set and obtain the standard deviation to reduce computation time by any desirable factor .
r. s. shaw , _ the dripping faucet as a model chaotic system _ , aerial press , santa cruz ( 1984 ) .
p. martien , s. c. pope , p. l. scott , r. s. shaw , phys . lett . a , * 110 * , 399 ( 1985 ) .
h. bnard , rev . pure appl . * 11 * , 1261 ( 1900 ) .
lord rayleigh , phil . mag . * 32 * , 529 ( 1916 ) .
b. castaing , g. gunarante , f. heslot , l. kadanoff , a. libchaber , s. thomae , x. z. wu , s. zaleski , g. zanetti , j. fluid mech . , * 204 * , 1 ( 1989 ) .
s. birkholz , c. bre , a. demircan and g. steinmeyer , phys . lett . * 114 * , 213901 ( 2015 ) .
a. w. jayawardena and f. lai , j. hydrol . * 153 * , 23 ( 1994 ) .
e. w. saad , d. v. prokhorov , and d. c. wunsch , ieee trans . neural netw . * 9 * , 1456 ( 1998 ) .
a. b. cohen , b. ravoori , t. e. murphy and r. roy , phys . * 101 * , 15 ( 2008 ) .
p. grassberger and i. procaccia , phys . lett . * 50 * , 5 ( 1983 ) .
e. n. lorenz , j. atmos . sci . * 20 * , 130 ( 1963 ) .
m. c. mackey and l. glass , science , * 197 * , 287 ( 1977 ) .
a. wolf , j. b. swift , h. l. swinney and j. vastano , physica ( amsterdam ) * 16d * , 285 ( 1985 ) .
h. whitney , ann . math . * 37 * , 645 ( 1936 ) .
f. takens , lecture notes in mathematics , * 898 * ( springer , berlin ) , 366 ( 1981 ) .
a. m. fraser , h. l. swinney , phys . rev . a * 33 * , 1134 ( 1986 ) .
e. ott , _ chaos in dynamical systems _ , cambridge university press ( 2002 ) .
j. d. farmer , e. ott , j. a. yorke , physica * 7d * , 153 ( 1983 ) .
h. d. i. abarbanel , r. brown , j. j. sidorowich , and l. tsimring , rev . phys . * 65 * , 4 ( 1993 ) .
r. gallagher , ieee trans . theory , * 11 * , 13 ( 1965 ) .
m. b. kennel , r. brown , and h. d. i. abarbanel , phys . rev . a * 45 * , 3403 ( 1992 ) .
d. s. broomhead and r. jones , proc . london a , * 423 * , 103 ( 1989 ) .
d. t. kaplan and l. glass , phys . lett . * 68 * , 427 ( 1992 ) .
|
a new and accurate method to determine the time delay and embedding dimension for state space reconstruction of a high dimensional system from a scalar time series using time delay embedding is presented . the time delay is obtained to unprecedented accuracy by evaluating the minima of a newly defined dimension deviation function . the efficacy of our method is tested by applying it to the lorenz system and the mackey - glass system . a good agreement is obtained between the shape and embedding dimension of the physical system attractor(s ) and the corresponding reconstruction(s ) for both the systems studied . this , along with a heuristic argument provide a validation of the proposed method .
|
one of the standard predictions of nonlinear hierarchical structure formation models is the abundance of virialized structures .simulations show that this abundance depends on the large scale environment : the ratio of massive to low mass objects is larger in dense regions ( e.g. , * ? ? ?recent measurements in galaxy surveys appear to bear this out : the virial radii of objects in underdense regions are smaller , consistent with their having smaller masses ( e.g. , * ? ? ?this paper is motivated by the fact that there are currently in the literature three methods for estimating how the mass function of virialized halos depends on the environment which surrounds them .the first , and perhaps easist to implement , is based on the excursion set approach .the second argues that halos which form in , say , voids should be thought of as forming in a less dense background cosmology , so the mass function is that in a universe with ( e.g. , * ? ? ?the third is similar , but notes that to correctly estimate the background cosmology , one must account not only for the lower density in a void , but for the fact the effective hubble constant of the void cosmology is larger than in the background ( e.g. , * ? ? ?one way of thinking about the effective hubble constant is that it ensures that the effective cosmology has the same age as the background cosmology .( the cosmological constant is , of course , constant , but when expressed in units of the critical density in the effective model , it is modified because the critical density depends on the effective hubble constant . ) in section [ equivalence ] , we use the spherical evolution model to show that the first and third methods are equivalent ( although * ? ? ?* state otherwise ) , and that both are incompatible with the second method ( which incorrectly ignored the change to the hubble constant ) .there has been recent interest in the fact that the formation histories of halos of fixed mass depend on their environment , an effect which is not predicted by the simplest excursion set methods ( e.g. , * ? ? ?so one might have wondered if this is where the difference between the excursion set approach and one based on the effective cosmology is manifest . in section [ evolution ]we show that in this case also , the two approaches are equivalent .a final section summarizes our results , and speculates that the equivalence we have shown will not survive in models models where the force law has been modified from an inverse square .the main point of the following calculation is to show explicitly that , at least for cosmologies with no cosmological constant , the environmental dependence of halo abundances can be described using the excursion set approach ( e.g. , * ? ? ?* ; * ? ? ?namely , one need not worry about the details of the effective cosmology associated with the region surrounding the perturbation ( as do * ? ? ?* ) ; it is enough to compute an effective growth factor using the spherical collapse model .although we have phrased our discussion in terms of an background cosmology , it is obviously applicable to arbitrary values of .our analysis suggests that this remains true when the background cosmology has . for what follows , it is useful to recall that the age - redshift relation in an cosmology is given by , where is the hubble constant . 
in an open universe ,this relation is where with the convention that , so , and .the linear theory growth factor is if , and if then \right)\ ] ] where .the spherical evolution model describes the evolution of the size of a spherical region in an expanding universe : it provides a parametric relation between the density contrast predicted by linear theory , the nonlinear overdensity , and the infall speeds . here is the linear theory growth factor at time , and we will often use the shorthand , . if , then where where the first expression in each pair is for initially overdense perturbations and the second is for underdense ones .overdense perturbations eventually collapse , the final collapse being associated with the value , at which time the linear theory density is . in this section ,we use the subscript 1 to indicate that this value is associated with .if , then only perturbations above some density will collapse , and where is the hubble constant , and complete collapse is again associated with , and we will write the critical linear density required for collapse as .\ ] ] it happens that depends only weakly on .when , then , and when .the parametric solution is rather cumbersome .it happens that the relation between and is rather well approximated by similarly , it is also useful to have an approximation to the exact solution for the linear theory growth factor .when , then the linear theory growth factor is well approximated by , where denotes the expansion factor at time , and is given by equation ( [ omegat ] ) .this expression is normalized so that if .suppose we consider the evolution of a spherical underdense region in an universe .let denote the density in this region .if we wish to think of this region as being an underdense universe , then the effective value of in this region is smaller than unity for two reasons : first , because the density is lower by a factor of , and second because the region is expanding faster than the background , so it has an effective hubble constant which is larger . to see what equation ( [ scmodel ] ) implies for the evolution ,let denote the density of a small patch respect to the background density ( the subscript unity denotes the fact that this is the overdensity with respect to a background which has critical density : ) .now , suppose that this patch is surrounded by a region within which the average density is with respect to the true background .then the smaller patch has overdensity with respect to its local background . 
if we wish to describe the local environment as has having its own effective cosmological parameters, then the local value of the hubble constant differs from the global one : .thus , the expressions in equation ( [ scmodel ] ) are really the statements that where is the peculiar velocity of the shell with respect to the background , had the mass within been smoothly distributed ( we know it is not because the central region has density ) .now , the local value of within differs from the global value because and because the different expansion rate means that the local value of the critical density is different : where we have used the fact that .notice that this relation between and is the same as equation ( [ uomega ] ) .in other words , we get the same description for the evolution of the small scale patch if we treat it as having overdensity with respect to the background within which the hubble constant is , as if we describe it with respect to the local cosmological model and , and we rescale our definitions of density and peculiar velocity accordingly .in addition , using the exact expression for the age of the universe given above , we can see that these definitions also guarantee that is the same in the both the background and the local cosmological model .if we write the linear theory overdensity associated with as then \nonumber = { \delta_{\rm c\omega}\over \delta_{\rm c1 } - \delta_\omega } \bigl[h(\theta ) - \delta_\omega\bigr].\ ] ] the term in square brackets is simply the difference in linear theory values for the background cosmology .if we think of this as an effective linear theory overdensity in the effective cosmology , then the prefactor is the effective linear theory growth factor .it is straightforward to verify that , indeed , where is the growth factor in the background cosmology , and is the growth factor in the patch , _ at time . this last point is important , as the expansion factor in the patch cosmology is not equal to the expansion factor in the background cosmology . in particular , we know that . for completeness , we note that ( recall that we are in an underdense region ) . in the following ,take , so .the approximate solution ( [ approx ] ) of the spherical evolution model shows similar behaviour : where denotes the linear theory value associated with the nonlinear density for .the final approximation follows from recalling that depends only weakly on cosmology .comparison with equation ( [ approx ] ) shows explicitly that the relevant linear theory quantity is the difference between the values for the perturbation and the environment , and this difference must be multiplied by the linear growth factor in the effective cosmology .now , to estimate the mass function of virialized objects , we are interested in the case when .the analysis above shows that ; the objects which form in a region of nonlinear density with respect to the background , with corresponding linear overdensity , can either be thought of as forming in an effective cosmology ( e.g. , * ? ? ?* ) , or as forming in the true background cosmology but with an effective linear theory overdensity which is offset by to account for the surrounding overdensity ( e.g. , * ? ? ?* ; * ? ? ?the second description is easier to implement , and follows naturally from the excursion set description . in particular, the analysis above shows that approaches which do not correctly compute ( e.g. , gottlber et al .2003 ignore the fact that ) are incompatible with the excursion set approach . 
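as a concrete reference for the spherical evolution model used throughout this section , the sketch below evaluates the standard parametric solution for a top - hat perturbation in an einstein - de sitter background , from which the familiar linearly extrapolated collapse threshold near 1.686 follows . the function names and the values printed are illustrative choices of this example .

```python
import numpy as np

def spherical_overdense(theta):
    """parametric solution for an overdense top-hat in an einstein-de sitter
    background: returns (linear-theory overdensity, nonlinear overdensity)."""
    s = theta - np.sin(theta)
    delta_lin = 3.0 / 20.0 * (6.0 * s) ** (2.0 / 3.0)
    delta_nl = 9.0 * s ** 2 / (2.0 * (1.0 - np.cos(theta)) ** 3) - 1.0
    return delta_lin, delta_nl

def spherical_underdense(eta):
    """same parametrisation with hyperbolic functions for an underdense top-hat."""
    s = np.sinh(eta) - eta
    delta_lin = -3.0 / 20.0 * (6.0 * s) ** (2.0 / 3.0)
    delta_nl = 9.0 * s ** 2 / (2.0 * (np.cosh(eta) - 1.0) ** 3) - 1.0
    return delta_lin, delta_nl

if __name__ == "__main__":
    # complete collapse corresponds to theta -> 2*pi, where the linearly
    # extrapolated overdensity approaches a value close to 1.686
    print(spherical_overdense(2.0 * np.pi - 1e-6)[0])
    # underdense branch: a few (linear, nonlinear) pairs
    for eta in (1.0, 2.0, 3.0, 4.0):
        print(spherical_underdense(eta))
```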
in any case , the analysis above suggests that such approaches are ill - motivated .the previous section showed that the excursion set approach results in the same expressions for the environmental dependence of the present day linear theory growth factor as one derives from thinking of the environment as defining an effective cosmology .so the question arises as to whether or not the two approaches predict the same evolution .for example , one might have wondered if the formation histories of objects are the same in these two approaches . to see that they are, it will be convenient to modify our notation slightly .we showed that where the subscripts 0 mean the present time .the quantity is what we previously called ; it is the value of the initial overdensity extrapolated using linear theory ( of the background cosmology ) to the time at which the nonlinear density is . also , we previously had set the growth factor in the background universe at the present time to unity : .we have written it explicitly here to show that , had we chosen to perform the calculation for some earlier time , then we would have found where the subscript 1 denotes the earlier time .i.e. , is the effective cosmology associated with the overdensity , which itself is related to by the spherical evolution model ( the region that is today was a different volume in the past , but its mass was the same . ) and , analogously to the previous expression , is the initial overdensity extrapolated using linear theory to the ( earlier ) time at which the nonlinear density was .since is closer to 0 than is , is also closer to 0 than is .if one were to apply the excursion set approach to study formation histories in the effective cosmology , one would be interested in the difference between equations ( [ highz ] ) and ( [ lowz ] ) : .\end{aligned}\ ] ] now , the quantity in square brackets is \ ,d_0^{-1 } = 0,\ ] ] because and are the same quantity ( the initial overdensity ) , evolved using linear theory to two different times .in particular , is closer to 0 than is by .thus , \ , d_0^{-1 } .\ ] ] note that the expression on the right has _ no _ dependence on the effective cosmology .moreover , it is exactly the same as the expression that one obtains when using the excursion set approach to study formation histories in the background cosmology .it is in this sense that the formation histories of objects are independent of the effective cosmology of the environment ; the excursion set approach is a simple self - consistent way of exploiting this fact .the excursion set description provides a simple , self - consistent way of estimating the effect of environment on structure formation and evolution . in particular , it is equivalent to using the fact that the large scale environment can be thought of as providing an effective background cosmology of the same age ( section [ equivalence ] ) . estimating the parameters of the effective cosmology is slightly more involved , but useful for running simulations which mimic the formation of structure in different environments .in essence , the equivalence between the excursion set and effective cosmology descriptions is a consequence of birkhoff s theorem : the evolution of a perturbation does not depend on its surroundings .there has been recent interest in models with modified gravitational force laws ( e.g. , * ? ? ?* ; * ? ? ?* ; * ? ? ?since birkhoff s theorem does not apply in such models , it will be interesting to see if this equivalence survives . 
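the growth factors that enter these differences can be evaluated numerically for the lambda = 0 cosmologies considered here . the sketch below uses a simple quadrature for the age and the closed - form growing mode quoted by peebles ( 1980 ) , one of the works in the reference list ; the overall normalisation of the growth factor and the integration settings are assumptions of this illustration .

```python
import numpy as np

def hubble_ratio(a, omega0):
    """h(a)/h0 for a matter + curvature (lambda = 0) cosmology."""
    return np.sqrt(omega0 * a ** -3 + (1.0 - omega0) * a ** -2)

def omega_at(a, omega0):
    """matter density parameter at expansion factor a (lambda = 0)."""
    return omega0 * a ** -3 / hubble_ratio(a, omega0) ** 2

def age(a, omega0):
    """age of the universe at expansion factor a, in units of 1/h0
    (simple trapezoidal quadrature)."""
    x = np.linspace(1e-6, a, 20001)
    return np.trapz(1.0 / (x * hubble_ratio(x, omega0)), x)

def growth_factor(a, omega0):
    """growing mode of linear perturbations in an open (lambda = 0) universe,
    using the closed-form expression quoted by peebles (1980); the overall
    normalisation is arbitrary, and the omega0 = 1 branch simply returns a."""
    if omega0 == 1.0:
        return a
    x = (1.0 / omega0 - 1.0) * a
    return (1.0 + 3.0 / x
            + 3.0 * np.sqrt(1.0 + x) / x ** 1.5
            * np.log(np.sqrt(1.0 + x) - np.sqrt(x)))

if __name__ == "__main__":
    for om in (1.0, 0.3, 0.1):
        growth_ratio = growth_factor(1.0, om) / growth_factor(0.5, om)
        print(om, round(age(1.0, om), 3), round(omega_at(0.5, om), 3),
              round(growth_ratio, 3))
```

feeding the lower density and larger hubble constant of an underdense patch into these functions gives the effective - cosmology growth factor discussed above ; the ratio of growth factors between two epochs is independent of the arbitrary normalisation .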
if not , the enviromental dependence of clustering may be added as another constraint on such models .rks thanks bepi tormen for asking about this equivalence on more than one occasion , and the participants of the meeting on cosmological voids held in december 2006 at the royal netherlands academy of arts and sciences .we also thank e. neistein for insisting that the excursion set and effective cosmology approaches could not be reconciled with one another .99 abbas u. , sheth r. k. , 2007 , mnras , 378 , 641 bernardeau f. , colombi s. , gaztanaga e. , scoccimarro r. , 2002 , phys . rep ., 367 , 1 capozziello , s. , stabile , a. , & troisi , a. 2007 , , 76 , 104019 carroll s. m. , press w. h. , turner e. l. , 1992 , araa , 30 , 499 clifton , t. 2006 , classical and quantum gravity , 23 , 7445 dai , d .- c . , maor , i. , & starkman , g. 2008 , , 77 , 064016 frenk c. s. , white s. d. m. , davis m. , efstathiou g. , 1988 , apj , 327 , 507 gao l. , springel v. , white s. d. m. , 2005 , mnras , 363 , 66 goldberg d. m. , vogeley m. s. , 2004 , apj , 605 , 1 gottlber s. , lokas e. l. , klypin a. , hoffman y. , 2003 , mnras , 344 , 715 gunn j. e. , gott j. r. , 1972 , apj , 176 , 1 jenkins a. , frenk c. s. , white s. d. m. , colberg j. m. , cole s. , evrard a. e. , couchman h. m. p. , yoshida n. , 2001 , mnras , 321 , 372 martino m. c. , stabenau f. , sheth r. k. , 2008 , mnras , submitted mo h.j ., white s.d.m . , 1996 ,mnras , 282 , 347 padmanabhan t. , 1993 , structure formation in the universe , cambridge university press , cambridge , uk peebles p. j. e. , 1980 , the large - scale structure of the universe .princeton univ . press , princeton , nj schfer , b. m. , & koyama , k. 2008 , , 385 , 411 press w. h. , schechter p. l. , 1974 , apj , 187 , 425 schechter p. l. , 1980 , aj , 85 , 801 sheth r. k. , tormen g. , 1999 , mnras , 308 , 119 sheth r. k. , tormen g. , 2002 , mnras , 329 , 61 sheth r. k. , tormen g. , 2004 , mnras , 350 , 1385 shirata a. , shiromizu t. , yoshida n. , suto y. , 2005 , prd , 71 , 064030 shirata a. , suto y. , hikage c. , shiromizu t. , yoshida n. , 2007 , prd , 76 , 044026 stabenau h. f. , jain b. , 2006 , prd , 74 , 084007 white s. d. m. , 1996 , in cosmology and large scale structure , proceedings of the `` les houches ecole dete de physique theorique '' ( les houches summer school ) , session lx . edited by r. schaeffer , j. silk , m. spiro and j. zinn - justin .
|
in studies of the environmental dependence of structure formation , the large scale environment is often thought of as providing an effective background cosmology : e.g. the formation of structure in voids is expected to be just like that in a less dense universe with appropriately modified hubble and cosmological constants . however , in the excursion set description of structure formation which is commonly used to model this effect , no explicit mention is made of the effective cosmology . rather , this approach uses the spherical evolution model to compute an effective linear theory growth factor , which is then used to predict the growth and evolution of nonlinear structures . we show that these approaches are , in fact , equivalent : a consequence of birkhoff s theorem . we speculate that this equivalence will not survive in models where the gravitational force law is modified from an inverse square , potentially making the environmental dependence of clustering a good test of such models . [ firstpage ] methods : analytical - dark matter - large scale structure of the universe
|
in many of today s signal processing systems there is a need for random signal sampling .the idea of random signal sampling dates back to early years of the study on signal processing .signal reconstruction methods for this kind of sampling were studied , there are practical implementations of signal acquisition systems which employ random nonuniform sampling .recently , this method of sampling has received more attention hence to a relatively new field of signal acquisition known as compressed sensing .it was shown that in many compressed sensing applications the random sampling is a correct choice for signal acquisition .the random sampling gives a possibility to sample below nyquist rate , which lowers the power dissipation and reduces the number of samples to be processed .a process of random sampling is defined by a sampling pattern , which indicates signal sampling points in time .generation and analysis of random sampling patterns which are dedicated to be implemented in analog - to - digital converters is a subject of this work . in practice , sampling according to a given sampling pattern is realized with analog - to - digital converters .currently , there are available event - driven analog - to - digital converters , which are able to realize random sampling .these converters have certain practical constraints coming from implementation issues , which consequently puts implementation - related constraints on sampling patterns .these constraints concern minimum and maximum time intervals between adjacent sampling points , e.g. wakin et .al . in their work used a random nonuniform sampling pattern with minimum and maximum intervals between adjacent sampling points .furthermore , there are application - related constraints which concern stable average sampling frequency of sampling patterns , equal probability of occurrence of possible sampling points , and uniqueness of generated patterns . the problem which this work solves is composed of two parts .firstly , how to evaluate different sampling pattern generators with emphasis on practical applications ?the early work on estimation of random nonuniform sampling patterns was done by marvasti . looked for a sampling pattern with the best ( most equal ) histogram of inter - sample spacing .gilbert et .al proposed to choose a random sampling pattern based on permutations . to the best of our knowledge , there is no scientific work published which concerns multiparameter statistical analysis of random sampling patterns . 
due tothe constantly increasing available computational power it has become possible to analyze random pattern generators statistically within a reasonable time frame .statistical parameters which assess random sampling pattern generators with respect to the constraints described above are described in this paper .the second problem discussed in this paper is how to construct a random sampling pattern generator which generates patterns with a given number of sampling points , and given intervals between sampling points , with possibly minimum loss in randomness ?the well known random sampling pattern generators are additive random sampling ( ars ) and jittered sampling ( js ) .however , these sampling pattern generators do not take into account the mentioned implementation constraints , which is an obstacle in practical applications .there have been some attempts to generate more practical sampling patterns .lin and vaidyanathan discussed periodically nonuniform sampling patterns which are generated by employing two uniform patterns .bilinskis et al . in a concept of correlated additive random sampling , which is a modification of the ars .papenfu et al . in proposed another modification of the ars process , which was supposed to optimally utilize the adc .ben - romdhane et al . discussed a hardware implementation of a nonuniform pseudorandom clock generator . unfortunately , none of the proposed sampling pattern generators are designed to address all the implementation constrains .this paper proposes a sampling pattern generator which is able to produce constrained random sampling patterns dedicated for use in practical acquisition systems .the generator is compared with existing solutions using the proposed statistical parameters .implementation issues of this generator are discussed .the paper is organized as follows .the problem of random sampling patterns generation is identified in section [ sec : problem ] .statistical parameters for random pattern generators are proposed in section [ sec : parameters ] .a new random sampling pattern generator for patterns to be used in practical applications is proposed in section [ sec : generators ] .the proposed generator is compared with existing generators in section [ sec : taa ] .some of the implementation issues of random sampling patterns are discussed in section [ sec : implem ] .conclusions close the paper in section [ sec : conclusions ] .the paper follows the reproducible research paradigm , therefore all of the code associated with the paper is available online .this paper focuses on generation and analysis of random sampling patterns .the purpose of this section is to formally define a sampling pattern and its parameters , and to discuss requirements for sampling patterns and sampling pattern generators .a sampling pattern is an ordered set ( sequence ) with fixed sampling time points : where the sampling time points are real numbers ( ) .elements of such a set must increase monotonically : time length of a sampling pattern is equal to the time length of a signal or a signal segment on which the sampling pattern is applied .the time length may be higher than the last time point in a pattern : .any sampling point is a multiple of a sampling grid period : where is the set of natural numbers without zero .the sampling grid is a set : where is the number of sampling grid points in a sampling pattern , and signifies the floor function , which returns the largest integer lower or equal to the function argument .it can be stated that a 
pattern is a subset of a grid set ( . the sampling grid period describes the resolution of the sampling process . in practice ,the lowest possible sampling grid depends on the performance of the used adc , its control circuitry , and the clock jitter conditions .a sampling pattern may be represented as indices of sampling grid : let us define a set which contains intervals between the sampling points : if all the intervals are equal ( ) , then is a uniform sampling pattern with a sampling period equal to .if the time intervals are chosen randomly , then is a random sampling pattern .a random sampling pattern is applied to a signal of length : = s(t_{k } ) , \quad t_{k } \in \mathbb{t}\ ] ] where is a vector of observed signal samples .the average sampling frequency of a random sampling pattern depends on the number of sampling time points in the pattern : an example of a random sampling pattern is shown in fig .[ fig : pattern_unconstrained ] .let us denote a nontrivial problem of generation of a multiset ( bag ) with random sampling patterns .the time length of sampling patterns is , grid period is .the requested average sampling frequency of patterns is , minimum and maximum intervals between sampling points are and respectively .the problem is solved by random sampling pattern generators .the generators should meet requirements given in [ subsec : generators_rec ] , and all the produced sampling patterns must meet the requirements given below in [ subsec : patterns_rec ] . a random sampling pattern generator must produce sampling patterns with a requested average sampling frequency . if the average sampling frequency is lower than the requested sampling frequency , then the quality of signal reconstruction may be compromised . on the contrary , higher sampling frequency than the requested causes unnecessary power consumption .a requirement for minimum interval between sampling points comes from the adc technological constraints .violation of this requirement may render the sampling pattern impossible to implement with a given adc .similarly , there may be a requirement of maximum interval between samples .generating an adequate random sampling pattern is realizable if and , where is the requested average sampling period .as stated in ( [ eq : mono ] ) , sampling points in a given sampling pattern can not be repeated .repeated sampling points do not make practical sense since a signal can be sampled only once in a given time moment .if a sampling pattern contains repeated sampling points , then a dedicated routine must remove these repeated points . as described in [ subsec : patterns ] , a sampling pattern is an ordered set which is a subset of a grid . in other words , sampling points are drawn from a pool of grid points .the sampling pattern generator should not favor any of the sampling grid points . ideally , all of the sampling points should be equi - probable . repeated sampling patterns generate unnecessary processing overhead , especially if sampling patterns are generated offline and further processed ( fig . 
[fig : gen_case2 ] ) .an additional search routine which removes replicas of sampling patterns must be implemented in this case .therefore , the ideal random sampling pattern generator should not repeat sampling patterns unless all the possible sampling patterns have been generated .in this section we propose statistical parameters for evaluation of a tested random sampling pattern generator .aim of these parameters is to assess how well sampling patterns produced by the evaluated generator cope with the requirements described in [ subsec : patterns_rec ] and [ subsec : generators_rec ] .these parameters are to be computed for a bag of patterns produced by the evaluated generator , the parameters are computed using the monte carlo method .it is checked if every generated sampling pattern fulfills requirements given in [ subsec : patterns_rec ] and if a generated bag ( multiset ) of sampling patterns fulfill requirements given in the [ subsec : generators_rec ] . according to our best knowledge ,similar statistical evaluation has never been introduced before .let us introduce a statistical parameter indicating how well the evaluated generator fulfills the imposed requirement of the requested average sampling frequency ( [ subsec : patterns ] ) : where is the average sampling frequency of the -th sampling pattern .since all the sampling patterns have the same time length , in practice it is usually more convenient to use the requested number of sampling points in a pattern and count the number of actual sampling points in a pattern .this parameter is an average value of a relative frequency error of every sampling pattern .the lower the parameter is , the better is the frequency stability of the generator .additionally , let us introduce a parameter : which is the ratio of patterns in a bag which violate the frequency stability requirement .the parameter denotes whether the average sampling frequency of the -th pattern is incorrect .let us introduce statistical parameters which indicate how well the assessed generator meets the interval requirements discussed in sec .[ subsec : patterns_rec_dist ] . fora given -th sampling pattern let us create ordered subsets and , where is a set with intervals between sampling points as in ( [ eq : distset ] ) .these subsets contain intervals between samples which violate the minimum and the maximum requirements between sampling points and respectively : now let us introduce statistical parameters and : where denotes the number of elements in a set ( set s cardinality ) , and as in ( [ eq : distset ] ) .these parameters contain the average squared ratio of the number of intervals in a pattern which violate minimum / maximum interval requirements to the number of all intervals between sampling points in a pattern .the lower the above parameters are , the better the evaluated generator meets interval requirements .similarly to the frequency stability parameter , let us introduce and parameters : which are additional parameters which are equal to ratios of patterns which violate minimum or maximum intervals between sampling patterns .parameters and denote if the -th pattern meets the requirement of minimum and maximum intervals respectively .it is possible to assign to every -th pattern a parameter which denotes if a pattern violates the frequency stability ( [ subsec : patterns_rec_freq ] ) or the interval requirements ( [ subsec : patterns_rec_dist ] ) .the ratio of incorrect patterns of a bag is : where is a logical disjunction . 
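a minimal sketch of the checks described in this section is given below : it computes the average relative frequency error of a bag of patterns , flags violations of the minimum and maximum interval requirements , counts the ratio of incorrect patterns , and also estimates the probability of use of each grid point and the number of unique patterns . the function names , the representation of patterns as sorted arrays of grid indices , and the convention of expressing intervals in grid periods are assumptions of this example , not the notation of the paper .

```python
import numpy as np

def pattern_metrics(bag, k_req, t_min_grid, t_max_grid):
    """evaluate a bag of sampling patterns (each a sorted array of grid
    indices) against the requested number of points k_req and the
    minimum / maximum allowed intervals, both given in grid periods.
    returns (average relative frequency error, ratio of incorrect patterns)."""
    freq_errs, incorrect = [], 0
    for pat in bag:
        pat = np.asarray(pat)
        freq_errs.append(abs(len(pat) - k_req) / k_req)
        gaps = np.diff(pat)
        bad_freq = len(pat) != k_req
        bad_min = bool(np.any(gaps < t_min_grid))
        bad_max = bool(np.any(gaps > t_max_grid))
        incorrect += bad_freq or bad_min or bad_max
    return float(np.mean(freq_errs)), incorrect / len(bag)

def grid_point_probability(bag, n_grid):
    """relative frequency of use of each grid point across the bag
    (ideally close to uniform)."""
    counts = np.zeros(n_grid)
    for pat in bag:
        counts[np.asarray(pat)] += 1
    return counts / counts.sum()

def unique_patterns(bag):
    """number of distinct patterns in the bag."""
    return len({tuple(p) for p in bag})
```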
using parameter it is possible to generate a sub - bag which contains only correct patterns from the bag : where signifies that a pattern is an element of a multiset .please note that is a multiset , so patterns which are the elements of may be repeated , and patterns which are the elements of the multiset may also be repeated . ideally , a sub - bag with correct patterns is identical to the original bag .let us introduce a statistical parameter which indicates whether the probability density of occurrence for grid points in patterns from bag is uniformly distributed : the probability of occurrence of the -th grid point is : where is the number of sampling grid points in a sampling pattern , is the total number of sampling points in all the patterns in a bag , and the parameter indicates whether the -th grid point is used in the -th sampling pattern : additionally , let us introduce a statistical parameter which is calculated identically to , but based on sampling patterns from subbag ( [ eq : bagastar ] ) .let us create a set for a bag of sampling patterns generated by the evaluated pattern generator which contains only unique patterns from .similarly , let us create a set which contains only unique patterns from the subbag with correct patterns ( [ eq : bagastar ] ) .now let us introduce parameters and : these parameters count the number of unique patterns and unique correct patterns in the bag with generated patterns .algorithms of sampling pattern generators are presented in this section .subsection [ sec : js_ars ] presents existed , widely known jittered sampling ( js ) and additive random sampling ( ars ) algorithms .subsection [ sec : angie ] presents the proposed sampling pattern generator algorithm , which is tailored to fulfill the requirements presented in [ subsec : patterns_rec ] and [ subsec : generators_rec ] .please note that all the algorithms presented in this paper generate sampling patterns represented as indices of sampling grid points as in ( [ eq : grid_representation ] ) .jittered sampling and additive random sampling algorithms are widely used to generate random sequences .there are 4 input variables to the js and ars algorithms : requested time of a sampling pattern , grid period , requested average sampling frequency and the variance parameter .the realizable time of a sampling pattern may differ from the given requested time of a pattern if the given time is not a multiple of the given grid period . 
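one possible reading of the two generators , with the four inputs listed above , is sketched below ; the choice of a gaussian jitter and a gaussian interval distribution , the starting phase , the rounding to the grid and the removal of duplicates are assumptions of this illustration and may differ from the exact formulation used in the paper .

```python
import numpy as np

def jittered_sampling(t_total, t_grid, f_req, sigma2, rng=None):
    """jittered sampling: each nominal uniform sampling instant is perturbed
    by zero-mean gaussian jitter with variance sigma2, then rounded to the grid."""
    rng = rng or np.random.default_rng()
    t_avg = 1.0 / f_req
    nominal = np.arange(t_avg / 2, t_total, t_avg)
    jittered = nominal + rng.normal(0.0, np.sqrt(sigma2), size=nominal.size)
    idx = np.round(jittered / t_grid).astype(int)
    idx = idx[(idx >= 0) & (idx < int(t_total / t_grid))]
    return np.unique(idx)               # sorted grid indices, duplicates removed

def additive_random_sampling(t_total, t_grid, f_req, sigma2, rng=None):
    """additive random sampling: the next instant is the previous one plus a
    random interval with mean 1/f_req and variance sigma2 (gaussian here)."""
    rng = rng or np.random.default_rng()
    t_avg = 1.0 / f_req
    t, out = 0.0, []
    while True:
        t += max(rng.normal(t_avg, np.sqrt(sigma2)), 0.0)
        if t >= t_total:
            break
        out.append(int(round(t / t_grid)))
    return np.unique(out)
```

note that neither sketch enforces a minimum or maximum interval , nor an exact number of sampling points , which is precisely the shortcoming the statistical parameters above are designed to expose .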
Before either of the algorithms is started, the number of grid points in a sampling pattern, the realizable time of a sampling pattern and the realizable requested number of sampling points must be computed from the requested pattern time, the grid period and the requested average sampling frequency.

The computed statistical parameters of sampling patterns are automatically tested for convergence. A mean value is considered converged if, for the last patterns, it did not change by more than 1% of the mean value computed for all the patterns tested so far. The minimum number of sampling patterns tested is . The uniqueness parameters and ([eq:uniq]) are computed after patterns.

Error parameters computed for the tested sampling pattern generators are plotted in fig. [fig:errparam]. The ratio of incorrect patterns is plotted in fig. [fig:errratio]. This ratio for the ANGIE algorithm (blue) is equal to 0 for all values of the variance; thus, all the patterns have the correct average sampling frequency and correct intervals between sampling points. Patterns generated by the JS (green) and the ARS algorithms (black) are all correct only for very low values of the variance, but the quality parameters and for these values are poor (fig. [fig:qualparam] and fig. [fig:uniqueparam]). In fig. [fig:errparam] it can be seen that for nearly all values of the variance the frequency stability of the patterns generated by the JS and the ARS algorithms is compromised, and for most values of the requirement of minimum intervals between sampling points is not met by these algorithms.

The best values of the parameter are achieved by the JS (green) and the ARS (black) algorithms (fig. [fig:qualparam]), but only if all the patterns (including the incorrect ones) are taken into account (parameter ). If the quality parameter is computed only for the correct patterns (parameter ), it can be clearly seen that the proposed algorithm (blue) performs significantly better than the JS (yellow) and the ARS algorithms (yellow). Furthermore, the best values of are found for the values of the variance for which most of the patterns produced by the JS and the ARS algorithms are incorrect. Plots of the best probability density functions found for the tested algorithms are in fig. [fig:plotpdf]. Fig. [fig:uniqueparam] shows the number of unique patterns produced by the tested algorithms. The number of unique correct patterns produced by the proposed algorithm is higher than the number produced by the JS and the ARS algorithms for any value of the variance.

The above results show that the proposed algorithm ANGIE performs better than the JS and the ARS algorithms. All the patterns generated by the ANGIE algorithm are correct, i.e., they have the parameter defined as in ([eq:generalerror]) equal to 0. The quality parameters described in Sec. [sec:qpdf] and Sec. [sec:qu] are better for the proposed algorithm. It can be seen that the variance value, which is an internal algorithm parameter, should be adjusted to a given problem. For the given problem, the proposed algorithm performs best for .

In the second experiment four different cases (a-d) of sampling patterns are studied. Parameters of these cases are collected in table 1. In the first two cases there are requirements on both the minimum and the maximum distance between sampling points. In the second case there are only 5 sampling points requested per sampling pattern, and the number of sampling grid points is limited to 100.
In the third case there are no requirements imposed on the distances between sampling points, so there is only the requirement of a stable average sampling frequency. This case is distinct from the others, because the number of sampling points per sampling pattern is high ( ), and the grid period is very low. In the last case there is a requirement on the maximum distance between sampling points. In all four cases the variance is logarithmically swept in the range .

In this experiment three quality parameters are measured for all three generators (JS, ARS and ANGIE). The first parameter is the ratio of incorrect patterns ([eq:generalerror]). The second is the probability density parameter as in ([eq:pdfpar]), but computed only for the correct patterns. The third quality parameter is the number of unique correct patterns in the first generated patterns ([eq:uniq]).

[table:exp2] Table 1 (parameters of cases a-d, one row per case):
case a: - , - , 0.05 , 10 , 30 , - , 50 , 10 , 30
case b: 0.1 , 1 , 50 , 0.015 , 0.028 , 100 , 5 , 15 , 28
case c: - , 1 , 10 , - , - , - , - , - , -
case d: 0.005 , 25 , - , - , 14 , 2 , 500 , - , 56

Results of this experiment are shown in figures [fig:eps2corr]-[fig:exp2unique]. Each figure presents one measured quality parameter for all four cases. The ratio of incorrect patterns is shown in fig. [fig:eps2corr], the probability density parameter in fig. [fig:exp2ep], and the number of unique correct patterns in fig. [fig:exp2unique].

Let us take a look at the ratio of incorrect patterns (fig. [fig:eps2corr]). The ANGIE algorithm generates only correct sampling patterns. Thanks to line 10 in the algorithm (see algorithm 2), the minimum and the maximum distances between sampling points are kept. Lines 6-8 in the ANGIE algorithm ensure that there is room for the correct number of sampling points in all the generated sampling patterns. In contrast, both the ARS and the JS algorithms generate a lot of incorrect patterns; for high values of the variance these two algorithms generate only incorrect patterns.

In three of the cases (a, c, d) the best probability density parameter (fig. [fig:exp2ep]) measured for patterns generated by the ANGIE algorithm is better than for the other two algorithms. Additionally, it can be seen in fig. [fig:exp2unique] that the number of generated unique correct sampling patterns is in all four cases significantly higher for the proposed ANGIE algorithm.

Let us take a closer look at case b. In this case, the best probability density parameter found for the ARS algorithm ( ) is slightly better than the best found for ANGIE ( ). Still, for the above values of the number of unique patterns is significantly better for the ANGIE algorithm, and the vast majority of the patterns generated by the ARS are incorrect for .

We tried to find a case for which the ARS and JS algorithms would clearly and distinctly outperform ANGIE, but it turned out to be an impossible task. Still, it is difficult to provide the reader with one golden rule as to which algorithm should be used. In practical applications there may be a huge number of different sampling scenarios; in this paper we covered only a tiny fraction of examples, and therefore every case should be considered separately.
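How a generator enforces the interval constraints is central to the comparison above. Purely as an illustration of such constraint handling, and not as a reproduction of the ANGIE algorithm (whose pseudocode is given in algorithm 2), the toy generator below draws every grid index uniformly from the window allowed by the minimum and maximum interval requirements and by the number of points still to be placed; all names and the uniform draw are our own choices.

```python
import numpy as np

def constrained_pattern(n_pts, n_grid, k_min, k_max, rng=np.random.default_rng()):
    """Toy interval-constrained generator (illustration only, not ANGIE)."""
    idx, last = [], 0
    for k in range(n_pts):
        remaining = n_pts - k - 1
        lo = last + (k_min if idx else 1)              # respect the minimum interval
        hi = min(last + k_max if idx else n_grid,      # respect the maximum interval
                 n_grid - remaining * k_min)           # leave room for remaining points
        if lo > hi:
            raise ValueError("constraints cannot be satisfied")
        last = int(rng.integers(lo, hi + 1))           # uniform draw from the window
        idx.append(last)
    return np.array(idx)
```

Such a naive generator always produces correct patterns in the sense of ([eq:generalerror]), but its draw is not grid-uniform in general (when the number of points times the average admissible step is well below the grid length, early grid points are visited far more often), which is exactly the kind of bias the probability density parameter is designed to expose; this is where the design of the actual constrained generator matters.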
in general, angie algorithm will always generate correct sampling patterns .but if these sampling patterns will have all quality parameters ( especially ) better than sampling patterns generated by the other algorithms , that is an another issue . from our experiencewe claim that indeed , in most of the cases angie is the right choice. however , there might be applications in which , for example , equi - probability of occurrence of every sampling point is a critical matter and other algorithms might perform better . in practical applications ,a numerical experiment should be always conducted to choose a correct pattern generator and to adjust variance value .we prepared a software patterns testing system ( pates ) , which is open - source and available online .this software contains all the three generators considered in this paper plus routines which compute the proposed quality parameters . with this softwarea user is able to test the generators for his own sampling scenario .we have created a graphical user interface to the software ( fig .[ fig : pates - gui ] ) , which makes using the system more intuitive .reproducible research scripts which can be used to produce results from the presented experiments are also available in .in this section we discuss some of the implementation issues of random sampling patterns . in this paper , we focus on offline sampling pattern generation ( fig . [fig : gen_case2 ] ) , where patterns are prepared offline by a computational server and then stored in a memory which is a part of a signal processing system .immediate generation of sampling patterns would require very fast pattern generators which are able to generate every sampling point in a time much shorter than minimum time between sampling points .the angie algorithm ( alg . [ alg : angie ] ) requires a number of floating point computations before every sampling point is computed , therefore very powerful computational circuit would be necessary in real time applications where . in practical applicationsthere is a need to generate sampling patterns .sampling patterns are generated offline ( fig .[ fig : gen_case2 ] ) on a computational server . in naive implementation , alg .[ alg : angie ] is repeated times to generate random sampling patterns . this approach is suboptimal , because computation of initial parameters from equations ( [ eqalg : precomp1 ] ) and ( [ eqalg:2ndprep ] ) ( lines 2 - 3 ) is unnecessarily repeated times . in the optimal implementation lines 2 - 3 are performed only once before a bag of patterns is generated .we implemented the angie algorithm ( naive implementation ) in python .furthermore , we prepared an implementation in c and an optimized implementation in python ( vectorized code ) .all the implementations are available for download at .[ fig : time_of_exec ] shows time needed to generate sampling patterns .parameters of sampling patterns are identical to the parameters used in the experiment described in section [ sec : expsetup ] .the average sampling frequency is swept from 10 to 100 , and the duration of the patterns is kept fixed .measurements were made on an intel core i5 - 3570k cpu , and a single core of the cpu was used .the angie algorithm operates mostly on integer numbers , and therefore it requires maximally only three floating point operations p. sampling point .the algorithm time complexity vs. 
the average sampling frequency of a pattern is ( consider the logarithmic vertical scale ) , because lines 5 - 13 in alg .2 are repeated for every sampling point which must be generated .as expected , the optimized vectorized python / optimized c implementation is much faster than the naive python implementation .the analog - to - digital converter ( adc ) driver is a digital circuit which triggers the converter according to a given sampling pattern .the maximum clock frequency of the driver determines the minimum grid period .detailed construction of the driver depends on the used adc because the driver must generate specific signals which drive the adc .a simple driver marks the sample now signal every time the grid counter reaches a value equal to the current sampling time point .such a driver was implemented in vhdl language .the structure of the driver is shown in fig .[ fig : driver ] . due to the internal structure of the control circuit ,the grid period is eight times longer then the input clock period .table 2 contains results of synthesis of the driver in four different xilinx fpgas ..maximum clock values and minimum grid periods of an implemented driver in different xilinx fpgas [ cols="^,^,^ " , ] sampling patterns are read from a rom .the amount of memory used to store a sampling pattern [ in bytes ] is : where is the number of grid points in a pattern and is the number of sampling points in a pattern .depending on the available size of memory , different numbers of sampling patterns can be stored .[ fig : errpdf_vs_memory ] shows the relation between the memory size and the probability density parameter ( [ eq : pdfpar ] ) computed for the proposed angie algorithm .the parameters of the sampling patterns are identical to the parameters used in the experiment described in section [ sec : expsetup ] , although four different average sampling frequencies are used .as expected , the higher the average sampling frequency of patterns , the better the distribution of probability density function ( parameter is lower ). the higher the average sampling frequency of patterns , the more the memory needed to achieve the best possible probability density parameter . if the available memory is low , the probability density function becomes less equi - probable .this paper discussed generation of random sampling patterns dedicated to event - driven adcs . constraints and requirements for random sampling patterns and pattern generators were discussed .statistical parameters which evaluate sampling pattern generators were introduced .we proposed a new algorithm which generates constrained random sampling patterns .the patterns generated by the proposed algorithm were compared with patterns generated by the state - of - the - art algorithms ( jittered sampling and additive random sampling ) .it was shown , that the proposed algorithm performs better in generation of random sampling patterns dedicated to event - driven adcs .implementation issues of the proposed method were discussed .9 h. s. shapiro , r. a. silverman , `` alias free sampling of random noise '' , _ ieee trans .info . theory _147152 , ( 1960 ) h. g. feichtinger and k. grchenig and t. strohmer , `` efficient numerical methods in non - uniform sampling theory '' , _ numerische matematik _ , vol .423440 , ( 1995 ) i. homjakovs , m. greitans , r. 
shavelis , `` real - time acquisitions of wideband signals data using non - uniform sampling '' , _ proc .ieee eurocon 2009 _ , pp .11581163 , saint - petersburg , may .hui - qing liu , `` ads82x adc with non - uniform sampling clock . '' , _ analog applications journal _ , ( 2005 ) m. wakin , s. becker , e. nakamura , m. grant , e. sovero , d.ching , j. yoo , j. romberg , a. emami - neyestanak , e. candes , `` a nonuniform sampler for wideband spectrally - sparse environments '' , _ ieee trans .topics circuits syst .2(3 ) , ( 2012 ) e.j .cands and m. b. wakin , `` an introduction to compressive sampling '' , _ ieee signal process . mag .25(2 ) , pp . 2130 , ( 2008 ) j. laska , s. kirolos , y. massoud , r. baraniuk , a. gilbert , m. iwen and m. strauss , `` random sampling for analog - to - information conversion of wideband signals '' , _ proc .ieee dallas circuits and systems workshop ( dcas ) _ , pp .119122 , dallas , usa , ( 2006 ) r.g .baraniuk , m. davenport , r. devore , m. wakin , `` a simple proof of the restricted isometry property for random matrices '' , _ constructive approximation _ , vol .28(3 ) , pp . 253 - 263 , ( 2008 ) analog devices ( 2013 ) .`` a / d converters .analog devices.'',[online ] available : ` http://www.analog.com/en/analog-to-digital-converters/ad-converters/products ` ` /index.html ` b. le , t. w. rondeau , j. h. reed , w. bostian , `` analog - to - digital converters . a review of the past , present , and future . '' , _ ieee sig .proc . mag .22(6 ) , ( 2005 ) xilinx ( 2012 ) .`` 7 series fpgas and zynq-7000 all programmable soc xadc dual 12-bit 1 msps analog - to - digital converter'',[online ] available : ` http://www.xilinx.com/support/documentation/user_guides ` ` /ug480_7series_xadc.pdf ` f. marvasti , `` spectral analysis of irregular samples of mu1tidimensional signals , '' , _ presented the 6th workshop on multidim .signal processing _ ,pacific grove , california , ( sep . 1989 ) a. c. gilbert , m. j. strauss , and j. a. tropp , `` a tutorial on fast fourier sampling '' , _ ieee signal process . mag ._ , vol . 25 ,57 - 66 , ( 2008 ) f. marvasti , `` nonuniform sampling , theory and practice '' , _ springer science + business media _ ,( 2001 ) , isbn : 978 - 1 - 4613 - 5451 - 2 , new york , usa j.j .wojtiuk , `` randomized sampling for radio design '' , _ phd thesis _ ,university of southern australia , ( 2000 ) y. p. lin , and p.p .vaidyanathan , `` periodically nonuniform sampling of bandpass signals '' , _ ieee trans .circuits syst .ii _ , vol .45(3 ) , pp . 340351 , ( 1998 ) i. bilinskis , a. mikelsons , `` randomized signal processing . '' , _ prentice hall _ , ( 1992 ) ,isbn : 978 - 0 - 137 - 51074 - 0 , cambridge , uk f. papenfu , y.artyukh , e. boole , d. timmermann , `` nonuniform sampling driver design for optimal adc utilization '' , _ proc .internal symposium on circuits and systems , 2003 ( iscas03 ) _ , vol .516519 , bangkok , thailand , ( 2003 ) m. ben - romdhane , c. rebai , p. desgreys , a. ghazeli , p. loumeau , `` pseudorandom clock signal generation for data conversion in a multistandard receiver '' , _ proc .conference on design and technology of integrated systems in nanoscale era _ , pp .1-4 , tozeur , tunisia , ( 2008 ) p. vandewalle , j. kovacevic , and m. vetterli , `` reproducible research in signal processing [ what , why and how ] '' , _ ieee signal process . 
mag .26(3 ) , pp .3747 , ( 2009 ) aalborg university ( 2013 ) .`` irfducs project '' , [ online ] available : ` http://www.irfducs.org/pates/ `the work is supported by the danish council for independent research under grant number 060202565b .
|
Random sampling is a technique for signal acquisition which is gaining popularity in practical signal processing systems. Nowadays, event-driven analog-to-digital converters make random sampling feasible in practical applications. A process of random sampling is defined by a sampling pattern, which indicates signal sampling points in time. Practical random sampling patterns are constrained by ADC characteristics and application requirements. In this paper we introduce statistical methods which evaluate random sampling pattern generators with emphasis on practical applications. Furthermore, we propose a new random pattern generator which copes with strict practical limitations imposed on patterns, with as little loss in randomness of sampling as possible. The proposed generator is compared with existing sampling pattern generators using the introduced statistical methods. It is shown that the proposed algorithm generates random sampling patterns dedicated to event-driven ADCs better than existing sampling pattern generators. Finally, implementation issues of random sampling patterns are discussed. *Keywords:* analog-digital conversion, compressed sensing, digital circuits, random sequences, signal sampling
|
one of the central problems in quantum science and technology is the estimation of an unknown quantum state .quantum state tomography as the procedure of experimentally determining an unknown quantum state has become a standard technology for verification and benchmarking of quantum devices .two key tasks in quantum state tomography are data acquisition and data analysis . the aim of data acquisition is to devise appropriate measurement strategies to acquire information for reconstructing the quantum state . then in the step of data analysis , the acquired data is associated with an estimate of the unknown quantum state using an estimation algorithm . in order to enhance the efficiency in data acquisition , it is desired to develop optimal measurement strategies for collecting data .however , an optimal measurement strategy , which is only known for a few special cases , depends on the state to be reconstructed . to circumvent this issue ,many kinds of fixed sets of measurement bases are designed to be optimal either in terms of the average over a certain quantum state space or in terms of the worst case in the quantum state space .for instance , improved state estimation can be achieved by taking advantage of mutually unbiased bases ( mub ) and symmetric informationally complete positive operator - valued measures ( sic - povm ) . for multi - partite quantum systems , mub and sic - povmare difficult to experimentally realize since they involve nonlocal measurements .how to efficiently acquire information of an unknown quantum state using simple measurements that are easy to realize experimentally remains open . for data analysis in tomography ,although many methods , such as maximum - likelihood estimation , bayesian mean estimation , least - squared inversion , have been used to reconstruct the quantum state , this task can be computationally intensive , and may take even more time than the experiments themselves .it has been reported in that using the maximum - likelihood method to reconstruct eight - qubit took weeks of computation .therefore , the development of an efficient data analysis algorithm is also a critical issue in quantum state tomography . in ,a recursive linear regression estimation algorithm was presented which is much more computationally efficient in the sense that it can greatly save the cost of computation as compared to the maximum - likelihood method with only a small amount of accuracy sacrificed . for a given number of copies of the system , in order to improve the tomography accuracy by better tomographic measurements , a natural idea is to develop an adaptive tomography protocol where the measurement can be adaptively optimized based on data collected so far .adaptive measurements have shown more powerful capability than nonadaptive measurements in quantum phase estimation , phase tracking , quantum state discrimination , and hamiltonian estimation .actually , adaptivity has been proposed for quantum state tomography in various contexts .for example , the results on one qubit have demonstrated that adaptive quantum state tomography can improve the accuracy quadratically considering the infidelity index .however , when generalizing their results to -qubit systems , the adaptive tomography protocol will involve nonlocal measurements which are hard to realize in experiments . 
in this paper , we combine the computational efficiency of the recursive technique of with a new adaptive protocol that does not necessarily require nonlocal measurement to present a new recursively adaptive quantum state tomography ( raqst ) protocol . in our raqst protocol, no prior assumption is made on the state to be reconstructed .the state estimate is recursively updated based on the current estimate and the new measurement data .thus , we do not have to combine all the historical information with the new acquired data to update the estimate as the maximum - likelihood method .thanks to the simple recursive estimation procedure , we can obtain the estimate state in a realtime way , and using the estimate we can adaptively optimize the measurement strategies to be performed in the following step . in our raqst protocol , the measurement to be performed at each step is optimized upon the corresponding admissible measurement set determined by the experimental conditions .it is first demonstrated numerically that our raqst even with the simplest product measurements can outperform the tomography protocols using mubs and the two - stage mub adaptive strategy . for maximally entangled states ,the infidelity can even be reduced to beat the gill - massar bound which is a quantum cramr - rao inequality . moreover , if nonlocal measurements are available , with our raqst the infidelity can be further reduced . for a wide range of quantum states ,the infidelity of our raqst can be reduced to beat the gill - massar bound with a modest number of copies .we perform the two - qubit state tomography experiments using only the simplest product measurements , and the experimental results demonstrate that the improvement of our raqst over nonadaptive tomography is significant for states with a high level of purity .this limit ( very high purity ) is the one relevant for most forms of quantum information processing .a linear regression estimation ( lre ) method for quantum state tomography was proposed in , and the results have shown that the lre approach has much lower computational complexity than the maximum - likelihood estimation method for quantum tomography . here , we further develop this lre method to present a recursively adaptive quantum state tomography protocol that can greatly improve the precision of tomography .we first convert a quantum state tomography problem into a parameter estimation problem of a linear regression model .consider a -dimensional quantum system with hilbert space .let denote a set of hermitian operators satisfying ( i ) and ( ii ) , where is the kronecker function .using this set , the quantum state to be reconstructed can be parameterized as where is the identity matrix and .let , where denotes the transpose operation .a quantum measurement can be described by a positive operator - valued measure ( povm ) , which is a set of positive semidefinite matrices that sum to the identity , i.e. , and . in quantum state tomography , different sets of povms should be appropriately combined to efficiently acquire information of the unknown quantum state .let denote the admissible measurement set , which is a union of povms determined by the experimental conditions .each povm is denoted as . 
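For concreteness, one convenient and commonly used realization of the operator set introduced above for a two-qubit system is the set of normalized Pauli products. The sketch below (our own function names) builds this basis and converts between a density matrix and its parameter vector, assuming the parameterization of the state as the maximally mixed part plus a linear combination of the basis operators, as described above; any other set of traceless, mutually orthonormal Hermitian operators would serve equally well.

```python
import itertools
import numpy as np

# single-qubit Pauli matrices (index 0 is the identity)
PAULI = [np.eye(2, dtype=complex),
         np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def two_qubit_basis():
    """Traceless, orthonormal Hermitian operators (sigma_a x sigma_b)/2, (a,b) != (0,0)."""
    return [np.kron(PAULI[a], PAULI[b]) / 2.0
            for a, b in itertools.product(range(4), repeat=2) if (a, b) != (0, 0)]

def state_from_theta(theta, basis):
    """rho = I/d + sum_i theta_i Omega_i (d = 4 for two qubits)."""
    d = basis[0].shape[0]
    rho = np.eye(d, dtype=complex) / d
    for t, omega in zip(theta, basis):
        rho = rho + t * omega
    return rho

def theta_from_state(rho, basis):
    """Inverse map: theta_i = tr(rho Omega_i)."""
    return np.array([np.real(np.trace(rho @ omega)) for omega in basis])
```

The same construction generalizes to n qubits by taking all nontrivial n-fold Pauli products normalized by 2^(n/2); the POVM elements introduced next are expanded in the same basis.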
using the set of , elements of the povm can be parameterized as where , and .let when we perform the povm on copies of a system in state , the probability that we observe the result is given by assume that the total number of experiments is , and we perform a measurement described by times .let denote the number of the occurrence of the outcome from the measurement trials of .let , and .according to the central limit theorem , converges in distribution to a normal distribution with mean 0 and variance /n^{(j)} ] .a recursive lre algorithm can be utilized to find the solution of . for completeness, we present the recursive lre algorithm in appendix a. its basic idea is that one only needs to store the best estimate state so far , and then update it recursively using a bunch of new measurement results with a fixed setting .this is quite different from the maximum - likelihood estimation method since there one has to combine all the historical information with the new collected data to update the estimate , which is quite computationally intensive .it has been demonstrated in fig . 1 of that the recursive lre tomography algorithm can greatly reduce the total cost of computation with only a small amount of accuracy sacrificed in comparison with the maximum - likelihood estimation method .as demonstrated in appendix b , when the number of copies of the unknown quantum state becomes large , the only relevant measure of the quality of estimation becomes the mean squared error matrix .the mean squared error matrix depends upon the state ( i.e. , ) to be reconstructed and the chosen povms .thanks to the recursive algorithm , we can obtain the estimate of the state recursively , and then adaptively optimize the povm measurements that should be performed . by doing so , the accuracy of the tomography can be greatly improved .the details of how to adaptively choose povms are presented in appendix c. using the solution in ( [ theta ] ) and the relationship in ( [ rho ] ) , we can obtain a hermitian matrix with .however , may have negative eigenvalues and be nonphysical due to the randomness of measurement results . in this work ,the physical estimate is chosen to be the closest density matrix to under the matrix 2-norm . in standard state reconstruction algorithms , this task is computationally intensive .however , we can employ the fast algorithm in with computational complexity to solve this problem since we have a hermitian estimate with .it can be verified that pulling back to a physical state can further reduce the mean squared error .in this section we present the numerical results .first of all , we would like to stress two advantages of the recursive lre method : ( a ) as we have demonstrated in , the recursive lre method can greatly reduce the cost of computation in comparison with the maximum - likelihood method ; ( b ) the recursive lre algorithm is naturally suitable for optimizing measurements adaptively . the argument for the advantage ( b )can be explained as follows . for state tomographythe optimal measurements generally depend upon the state to be reconstructed . by utilizing the recursive lre algorithm ,we can obtain the estimate of the real state in a computationally efficient way . using the state estimate , the measurements to be performed can be adaptively optimized . 
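The flavor of the recursive estimation can be conveyed by a standard recursive least-squares step for the linear regression model above, in which each new measurement setting contributes a regressor x (built from the parameterization of the POVM element) and an observed relative frequency y, shifted by the constant term of the model. This is a generic sketch with our own names; the actual algorithm in appendix A additionally weights every equation by the estimated variance of its outcome.

```python
import numpy as np

class RecursiveLSE:
    """Generic recursive least-squares update for the linear model y ~ x^T theta.
    Each processed pair (x, y) refines the running estimate without revisiting
    earlier data; outcome-dependent weighting is omitted for brevity."""
    def __init__(self, dim, prior_scale=1e3):
        self.theta = np.zeros(dim)
        self.P = np.eye(dim) * prior_scale   # inverse of the accumulated design matrix

    def update(self, x, y):
        x = np.asarray(x, dtype=float)
        Px = self.P @ x
        gain = Px / (1.0 + x @ Px)
        self.theta = self.theta + gain * (y - x @ self.theta)
        self.P = self.P - np.outer(gain, Px)
        return self.theta
```

Because only theta and the matrix P are stored, the estimate can be refreshed after every batch of data, which is what makes the adaptive choice of the next POVM practical.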
in the following ,we perform numerical simulations of two - qubit tomography using only the recursive lre method while with six different measurement strategies : ( i ) standard cube measurements ; ( ii ) mutually unbiased bases ( mub ) measurements ; ( iii ) mub half - half ; ( iv ) known basis " ; ( v ) raqst1 : the admissible measurement set only contains the simplest product measurements ; ( vi ) raqst2 : the admissible measurement set is not limited .each of the tomography protocols ( iii)-(vi ) consists of two stages . in the first stage ,we all use the standard cube measurements . for the mub half - half ,we first perform standard cube measurements on copies and obtain a preliminary estimate via lre , and then measure the remaining half of copies so that one set of the bases is adaptively adjusted to diagonalize and it together with another four sets of bases constitutes a complete set of mub as proposed in . as compared to the mub half - half , for the known basis " , in the second stage , we perform a set of measurements so that one of the five bases of the mub is the eigenbasis of the state to be reconstructed .although it is impossible physically , this is a useful comparison . for the raqst ,we first perform standard cube measurements on copies and obtain a preliminary estimate .then we adaptively optimize the measurement to be performed at each iteration step upon the corresponding admissible measurement set ( see appendix d ) . in raqst1 ,the basic admissible measurement set is the standard cube measurement bases . at each iteration step , we add another set of product measurements obtained by solving a conditional extremum problem to the basic admissible measurement set ( see appendix e ) . in raqst2 , at each iteration step , the set of the eigenbases of the current estimate state is also added into the admissible measurement set .note that the admissible measurement set in raqst2 will involve nonlocal measurements in general if there are more than one particle .the details can be found in appendix d. for the raqst , we need to specify , which is the number of copies measured in the first stage , and the number of the iteration steps such that , where is the number of copies for each povm in the second stage . in principle , the number of the iteration steps in the second stage may depend on the preliminary estimate in the first stage . for simplicity , in this work, we give empirical formulas depending only upon the total number of the copies .note that in raqst1 and raqst2 , the admissible measurement sets are different , and so are their empirical formulas .for raqst1 , , , and for raqst2 , , where returns the maximum integer that is less than or equal to .obviously the formula for the resource distribution for raqst2 applies only when is not too large .we use monte carlo simulations to demonstrate the results .the figure of merit is the particularly well - motivated quantum infidelity , .[ fig0:subfig : a ] depicts average infidelity versus for the maximally entangled state . it can be seen that the average infidelity of the static tomography protocols ( i.e. , ( i ) and ( ii ) ) versus is in the order of . 
however , the gill - massar bound for the infidelity in two - qubit state tomography is .this can be obtained by combining the equations ( 5.29 ) and ( a.8 ) in ( see appendix [ sec : append : gm bound ] ) .it is clearly seen that , as compared to the static tomography protocols and the adaptive mub half - half , the average infidelity using our raqst protocol can be reduced to beat the gill - massar bound even only with the simplest product measurements . furthermore ,if there is no limitation on the admissible measurement set , the raqst2 can outperform the known basis " tomography , and the average infidelity of raqst2 versus can be significantly reduced to the order of the gill - massar bound , i.e. , . fig .[ fig0:subfig : b ] shows the histogram for raqst over 200 randomly selected pure states and 200 maximally entangled states when the total number of copies is for each random state .random pure states are created using the algorithm in .since all the maximally entangled states are equivalent under local unitary operations , they are randomly selected by applying randomly generated local unitary operators on the same maximally entangled states .we adopt the index to evaluate the performance of our raqst protocol . here , and represent the of the average infidelity between the corresponding estimate and the true state when the standard cube measurement bases and the raqst are utilized , respectively , while is the gill - massar bound .note that if , our adaptive protocol surpasses the standard measurement strategy , while if , our adaptive protocol beats the gill - massar bound . from fig .[ fig0:subfig : b ] we can see that our raqst protocol is particularly effective for the class of maximally entangled states which are important resources in quantum information .[ fig : subfig : a ] depicts average infidelity versus for state , which has purity tr=0.9955 .note that there are kinks in the four curves corresponding to the four different adaptive protocols ( iii)-(vi ) .we can see that each of the four curves can be divided into three segments from left to right . in the first segment ,the infidelity decreases quickly as increases until the infidelity is reduced to the order of the small eigenvalues of the state to be reconstructed , then the curves go into the second segment where the infidelity decreases slowly . afterthe infidelity is smaller than the smallest eigenvalues , the infidelity decreases quickly again as increases .this is because infidelity is hypersensitive to misestimation of small eigenvalues , as pointed out in .hence , we must accurately estimate the eigenvalues that appear to be zero .when the infidelity is in the order of the smallest eigenvalues , it will be hard to estimate them accurately , so the decay rate of the infidelity will become slow . 
once the infidelity decreases to be smaller than the smallest eigenvalues , we can estimate them more accurately as increases , and then the infidelity decreases quickly .it can be seen that our raqst1 can beat the static tomography protocols and the adaptive mub half - half protocol even with the simplest product measurements .the infidelity can be further reduced by using raqst2 , and when the total copies , the infidelity can be reduced to .[ fig : subfig : b ] shows average infidelity versus different purity when the total number of the copies for each state is .the quantum states are , where and satisfy the results show that when the states have a high level of purity , our raqst1 with the simplest product measurements can beat the mub protocol . however , as the state becomes more mixed ( decreases ) , using mub measurements for state tomography can do better than using the adaptive product measurements .this fact is due to the essential limit of product measurements on mixed states .as pointed out in , nonlocal measurements on a mixed state can extract more information .thus , to reconstruct mixed states , it is better to use nonlocal measurements , e.g. , mub measurements .it is also clear that the infidelity achieved by using raqst2 is much lower than that using mub , and can beat the gill - massar bound for a wide range of quantum states .in this section , we report the experimental results using our raqst protocol for two - qubit quantum state tomography .since it is hard to perform nonlocal measurements in real experiments , we only experimentally implement tomography protocols using ( i ) standard cube measurements and ( v ) raqst1 . as shown in fig .[ experimental_setup ] , the experimental setup includes two modules : state preparation ( gray ) and adaptive measurement ( light blue ) . in the state preparation module , a pair of polarization - entangled photons with a central wavelength at .2 nm is first generated after the continuous ar laser at nm with diagonal polarization pumps a pair of type i phase - matched -barium borate ( bbo ) crystals whose optic axes are normal to each other .the generation rate is about 3000 two - photon coincidence counts per second at a pump power of 60 mw .half - wave plates at both ends of the two single mode fibers are used to control polarization .then , one photon is either reflected by or transmits through a 50/50 beam splitter ( bs ) . in the transmission path ,a qwp is tilted to compensate the phase of the two - photon state for the generation of . in the reflected path , three 446 quartz crystals and a half wave plate with 22.5 used to dephase the two - photon state into a completely mixed state .the ratio of the two states mixed at the output port of the second bs can be changed by the two adjustable apertures for the generation of arbitrary werner state in the form .note that since the coherence length of the photon is only 176 ( due to the 4 nm bandwidth of the interference filter ( if ) ) , much smaller than the optical path difference which is about 0.5 m , two states from the reflected and transmission path only mix at the second bs rather than coherently superpose . 
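For reference, the infidelity used as the figure of merit in the simulations above can be evaluated directly from two density matrices. The short sketch below assumes the standard Uhlmann fidelity with the squared-trace convention; only the function name and the use of scipy are our choices.

```python
import numpy as np
from scipy.linalg import sqrtm

def infidelity(rho, sigma):
    """1 - F(rho, sigma), with F = (tr sqrt(sqrt(rho) sigma sqrt(rho)))^2."""
    s = sqrtm(rho)
    fid = np.real(np.trace(sqrtm(s @ sigma @ s))) ** 2
    return 1.0 - fid
```

Averaging this quantity over repeated simulated reconstructions of the same state gives the curves discussed above.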
in the adaptive measurement module ,the two - photon product measurements are realized by the combinations of quarter - wave plates , half - wave plates , polarizing beam splitters , single photon detectors and a coincidence circuit .the rotation angles of quarter - wave plates and half - wave plates can be adaptively adjusted by a controller according to the analysis of the collected coincidence data on a computer . in the first experiment , as shown in fig .[ fig2:subfig : a ] , we realize raqst1 and standard cube measurements tomography protocols for entangled states with a high level of purity with respect to different number of resources ranging from 251 to 251189 .first , we calibrate the true state using raqst1 with copies so that the infidelity of the calibrated true state is even 10 times smaller than the estimate accuracy achieved at with raqst1 .the purity of the calibrated state is 0.983 .systematic error is crucial in the experiments .beam displacers , which separate extraordinary and ordinary light , act as pbs and have an extinction ratio of about 10000:1 . as the precision of rotation stages of qwps and hwps are 0.01 , the rotation error is determined by the calibration error of optic axes , which is 0.1 in our experiment .phase errors of the currently used true zero - order qwps and hwps are , which dominate the systematic error of practically realized measurements .these error sources induce a systematic error to the estimate state , which can be characterized by its infidelity from the true state .the systematic error is in the order of 10 when the error sources take the above values . for resource number , the systematic error is of the same scale as or even larger than the statistical error due to finite resources ( copies ) . to deal with this problem ,we employ error - compensation measurements to reduce the systematic error to the order of . in error - compensation measurement technique ,multiple nominally equivalent measurement settings are applied to sub - ensembles such that the systematic errors can cancel out in first order .tomography experiments using both raqst1 and standard cube measurements are repeated 10 times for each number of photon resources . in the second experiment , as shown in fig .[ fig2:subfig : b ] , we realize tomography protocols using raqst1 and standard cube measurements for werner states with purities ranging from 0.25 to 0.98 . the purities are changed by adjusting the apertures . since the photon resource for each run of tomography protocols is only 10 , we use 10 copies to calibrate the true state .there are 40 experimental runs and 1000 simulation runs for each of nine werner states . in each raqst experiment , four adaptive steps are used to optimize the measurements .to ensure measurement accuracy , error - compensation measurements are also employed . in both of these two experiments ,our experimental results agree well with simulation results .the improvement of raqst1 protocol over standard cube measurements strategy is significant . 
according to the simulation results of mub protocol and the experimental results of raqst1 ,even only with the simplest product measurements , our raqst1 can outperform the tomography protocols using mutually unbiased bases for states with a high level of purity .taking into account the trade - off between accuracy and implementation challenge , from fig .[ fig : subfig ] and fig .[ fig2:subfig ] , raqst using the simplest product measurement seems to be the best choice for reconstructing entangled states with a high level of purity .we presented a new recursively adaptive quantum state tomography protocol using an adaptive lre algorithm and reported a two - qubit experimental realization of the adaptive tomography protocol . in our raqst protocol, no prior assumption is made on the state to be reconstructed .the infidelity of the adaptive tomography is greatly reduced and can even beat the gill - massar bound by adaptively optimizing the povms that are performed at each step .we demonstrated that the fidelity by using our raqst with only the simplest product measurements can even surpass those by using mutually unbiased bases and the two - stage mub adaptive strategy for states with a high level of purity .considering the trade - off between accuracy and implementation challenge , it seems that raqst using the product measurements is the best choice for reconstructing the pure and nearly pure entangled states , which are the most important resources for quantum information processing .it is worth stressing that our raqst protocol is flexible and extensible . for any finite dimensional quantum systems ,once the admissible measurement set is given , we can utilize the adaptive measurement strategy to recursively estimate an unknown quantum state . as demonstrated by numerical results ,if nonlocal measurements can be experimentally realized as the breakthrough of the technology , the admissible measurement set can be enlarged , and our raqst protocol can be better utilized accordingly .how to give a more effective empirical formula in the second stage is worthy of further exploring , where the formula depends upon the estimate state of the first stage .this is actually related to the tomography problem wherein some prior information is already known , e.g. , pure entangled states , matrix - product states , low - rank states . by taking full advantage of the prior information , more efficient raqst protocol may be designed .our raqst protocol may have wide applications in practical quantum tomography experiments ._ note added_. 
after we completed the experiments , recently we became aware of a highly relevant work taking a bayesian estimation approach to realize two - qubit adaptive quantum state tomography using factorized measurements .the authors would like to thank huangjun zhu for helpful discussions about the gill - massar bound for infidelity .the work was supported by the national natural science foundation of china under grants ( nos .61222504 , 11574291 , 61374092 and 61227902 ) and the australian research council s discovery projects funding scheme under project dp130101658 and centre of excellence ce110001027 .first , we transform the linear regression equations ( [ average2 ] ) into a compact form .after times of povms , we can obtain in total linear regression equations .we denote them as [ 1 , , , [ , , , [ 1 , , , [ , , where 12 & 12#1212_12%12[1][0] _ _ ( , , ) * * , ( ) * * , ( ) and , eds ., _ _ , , vol .( , , ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) _ _ ( , , ) , ed ., _ _ ( , , ) _ _ , ph.d .thesis , ( ) , * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) , * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) ( ) ( ) _ _ ( , , )
|
adaptive techniques have important potential for wide applications in enhancing precision of quantum parameter estimation . we present a recursively adaptive quantum state tomography ( raqst ) protocol for finite dimensional quantum systems and experimentally implement the adaptive tomography protocol on two - qubit systems . in this raqst protocol , an adaptive measurement strategy and a recursive linear regression estimation algorithm are performed . numerical results show that our raqst protocol can outperform the tomography protocols using mutually unbiased bases ( mub ) and the two - stage mub adaptive strategy even with the simplest product measurements . when nonlocal measurements are available , our raqst can beat the gill - massar bound for a wide range of quantum states with a modest number of copies . we use only the simplest product measurements to implement two - qubit tomography experiments . in the experiments , we use error - compensation techniques to tackle systematic error due to misalignments and imperfection of wave plates , and achieve about 100-fold reduction of the systematic error . the experimental results demonstrate that the improvement of raqst over nonadaptive tomography is significant for states with a high level of purity . our results also show that this recursively adaptive tomography method is particularly effective for the reconstruction of maximally entangled states , which are important resources in quantum information .
|
the importance of networks in modern science can hardly be underestimated .the network representation , where the elementary units of a system become vertices connected by relational links , has proved very successful to understand the structure and dynamics of social , biological and technological systems , with the big advantage of a simple level of description .the structure of a network can be studied at the global level , focusing on statistical distributions of topological quantities , like degree , clustering coefficient , degree - degree correlations , etc . , or at the local level , disclosing how nodes are organized according to their specific features .topologically , such local organization of the nodes is revealed by the existence of subsets of the network , called communities or modules , with many links between nodes of the same subset and only a few between nodes of different subsets .communities can be considered as relatively independent units of the whole network , and identify classes of nodes with common features and/or special functional relationships .for instance , communities represent sets of pages dealing with the same topic in the world wide web , groups of affine individuals in social networks , compartments in food webs , etc .the problem of identifying communities in networks has recently turned into an optimization problem , involving a quality function introduced by newman and girvan , called modularity .this function should evaluate the `` goodness '' of a partition of a network into communities .the general idea is that a subset of a network is a module if the number of internal links exceeds the number of links that one expects to find in the subset if the network were random .if this is the case , one infers that the interactions between the nodes of the subset are not random , which means that the nodes form an organized subset , or module . technically , one compares the number of links inside a given module with the expected number of links in a randomized version of the network that keeps the same degree sequence . the partition is the better , the larger the excess of links in each module with respect to the random case . in this way ,the best partition of the network is the one that maximizes modularity .optimizing modularity is a challenging task , as the number of possible partitions of a network increases at least exponentially with its size .indeed , it has been recently proven that modularity optimization is an np - complete problem , so one has to give up the ambitious goal of finding the true optimum of the measure and content oneself with methods that deliver only approximations of the optimum , like greedy agglomeration , simulated annealing , extremal optimization and spectral division .we believe that the scientific community has been a little too fast in adopting modularity optimization as the most promising method to detect communities in networks . 
indeed , all research efforts focused on the creation of an effective algorithm to find the modularity maximum , without preliminary investigations on the measure itself and its possible limitations .only recently a critical examination has been performed , revealing that modularity has an intrinsic resolution scale , depending on the size of the system , so that modules smaller than that scale may not be resolved .this represents a serious problem for the applicability of modularity optimization , especially when the network at study is large .the existence of this bias has also been revealed in the hamiltonian formulation of modularity introduced by reichardt and bornholdt , that leaves some freedom in the criterion determining whether a subset is a module or not .other doubts about modularity and its applicability were raised before the discovery of the resolution limit . in our opinion , the problems of modularity optimization call for a debate about the opportunity to use quality functions to detect communities in networks .this general issue , which has never been discussed in the literature on community detection , is the subject of this paper .we start with an analysis of modularity , where we illustrate its features as well as its limits .such analysis is a valuable guide to uncover the possible problems that arbitrary quality functions may have , to understand what determines these problems and what can be done to solve them .we will see that , while it is easy to define a quality function within classes of partitions with the same number of modules , it is not clear how to compare network splits that differ in the number of modules. this paper reproposes some results of the recent work , carried out in collaboration with dr .marc barthlemy , integrating them with new material and discussion . in section 2we introduce and analyze the modularity of newman and girvan ; in section 3 we deal with the general issue of quality functions and their applicability ; our conclusions are summarized in section 4 .the modularity of a partition in modules of a network with n nodes and l links can be written in different equivalent ways .we stick to the following expression , \label{eq : mod}\ ] ] where the sum is over the modules of the partition , is the number of links inside module and is the total degree of the nodes in module .any method of community detection is bound to start from stating what a community is . in the case of modularity the definition of communityis revealed by each summand of eq .( [ eq : mod ] ) , where we distinguish two terms , and .the term is the fraction of links connecting pairs of nodes belonging to module , whereas represents the fraction of links that one would expect to find inside that module if links were placed at random in the network , under the only constraint that the degree sequence coincides with that in the original graph . in this way ,if exceeds , the subset of the network is indeed a module , as it presents more links than expected by random chance . the larger this excess of links , the better defined the module .we conclude that , within the modularity framework , a subgraph with internal links and total degree is a module if starting from this definition , newman and girvan deduced that the overall quality of the partition is given by the sum of the qualities of the individual modules , which is not straightforward , as we shall see in the next section . 
if all subsets of the partition are modules , in the sense specified by eq .( [ eq2 ] ) , the modularity of the partition is positive , i.e. . on the other hand , modularity is a bounded function .since each summand can not be larger than the term , one has \leq\sum_{s=1}^{m}\frac{l_s}{l}=\frac{1}{l}\sum_{s=1}^{m}l_s\leq 1 .\label{eq3}\ ] ] in this way , for any network , has a well defined maximum . since the partition into a single module , i.e. the network itself , yields , as in this case and , we conclude that the maximal modularity is non - negative .practical applications suggest that partitions with modularity values of about correspond to well defined community structures . however , these numbers should be taken with a grain of salt : the modularity maximum usually increases with the size of the network at study , so it is not meaningful to compare the quality of partitions of networks of very different size based on the relative values of .moreover , the modularity maximum of a network yields a meaningful partition only if it is appreciably larger than the modularity maximum expected for a random graph of the same size , as the latter may attain very high values , due to fluctuations .let us consider two subsets and of a network .the total degree of each subset is for and for .we want to calculate the expected number of links connecting the two subsets in the null model chosen by modularity , i.e. a random network with the same size and degree sequence of the original network . by construction, the total degrees of and in the randomized version of the network will be the same .each link of the network consists of two halves , or stubs , originating each from either of the nodes connected by the link .the probability that one of the stubs originates from a node of is ; similarly , the probability that the other stub of the link originates from a node of is .so , the probability that a link of the random network joins a node of with a node of is , where the factor two is due to the symmetry of the link with respect to the exchange of its two extremes .as there are links in total , the expected number of links connecting and is now we notice something interesting . in all our discussionwe set no constraint on the parameters , and other than the trivial conditions and , as the total degree of either subset can not exceed the total degree of the network by construction .in particular , and could be much smaller than , and could be smaller than one .to simplify the discussion , we assume that both and have equal total degree , i.e. . in this case, the condition implies , or , equivalently , in this way , if the total degree of either subset is smaller than , the expected number of interconnecting links in the random network would be less than one , so if there is even a single link between them in the original network , modularity would merge and in the same module .this is because the two subsets would appear more connected than expected by random chance .we have made no hypothesis on the subsets : the number of nodes in them does not play a role in our argument , as it does not enter the definition of modularity , as well as the distribution of links inside them . 
for all we know , and could even be two complete graphs , or cliques , which represent the most tightly connected subsets one can possibly have , as every node of a clique is connected with all other nodes .we found that , regardless of that , optimizing modularity would make them parts of the same module , even if they appear very weakly connected , since they share only one link .some striking consequences of this finding are shown in fig .[ fig1 ] , where we present two schematic examples . in the first example, we consider a network made out of cliques , i.e. graphs with nodes and links .each clique is connected to two others by one link , forming a ring - like structure ( fig .[ fig1]a ) .we have cliques , so that the network has a total of nodes and links .the natural community structure of the network is represented by the partition where each module corresponds to a single clique .the modularity of this partition equals and we would expect that is the maximum modularity for this network . if this is true , should be larger than the -value of any other partition of the network .let us consider the partition where the modules are pairs of consecutive cliques , delimited by the dotted lines in fig .the modularity of this partition is [ cols="^ " , ] we start with a set of nodes and links .we want to distribute the links among the nodes in order to build the ideal `` modular '' network .what kind of network is it ?intuitively , we expect that the network presents groups of nodes with the highest possible density of links between nodes of the same group , and the smallest possible number of connections between the groups .we assume that we have identical groups , for symmetry reasons .the highest density of links inside each group is attained when the latter is a complete graph .the ideal configuration should have interconnecting links , which is the minimum number of links necessary to keep the network connected .for the sake of symmetry , we instead use links , so that the cliques can be arranged in the ring - like structure schematically illustrated in fig .. the average degree of the network is fixed by construction to the value , and all nodes essentially have the same degree , with slight differences depending on their being connected or not to nodes of a different group . in this way, the cliques comprise nodes is in general not integer .therefore the statement means that some cliques have + 1 ] nodes , with $ ] the integer part of , so to respect the constraint on the total number of nodes and links of the network . ] .the number of cliques is then approximately , which is an important constraint on the desired quality function , and a useful indication on how to group nodes into modules . to test a quality function, one could identify the network partition that delivers the highest possible value of the measure , and check whether it coincides with the ideal partition that we have derived , or in which respect it is different from it . in the case of newman - girvan modularity ,such partition can be easily determined .we proceed in two steps : first , we consider the maximal value of modularity for a partition into a fixed number of modules ; after that , we look for the number that maximizes . 
again , the best configuration is the one with the smallest number of links connecting different modules .for simplicity we shall assume that there are bridges between the modules , so that the network resembles the one in fig .the modularity of such a network is , \label{eq10}\ ] ] where the maximum is reached when all modules contain the same number of links , i.e. .its value equals =1-\frac{m}{l}-\frac{1}{m}. \label{eq12}\ ] ] we have now to find the maximum of when the number of modules is variable . for this purposewe treat as a real variable and take the derivative of with respect to which vanishes when .we conclude that the network with the highest possible modularity comprises modules , with each module consisting of about nodes and links .the resolution scale of modularity optimization emerges at this stage , where we see that the best possible partition requires modules with total degree of about .the modules in general are not cliques , at variance with those of the ideal network partition .this is due to the fact that the number of nodes inside the modules does not affect the value of the modularity of a partition .modifications of modularity do not improve the situation . as an example , we introduce a modified modularity , that differs from the original measure of newman and girvan in that the quality of the partition is not the sum , but the average value of the qualities of the modules .this is _ a priori _ a meaningful definition and its expression reads , \label{eq14}\ ] ] where the symbols have the same meaning as in eq .( [ eq : mod ] ) .again , we wish to find the network partition that delivers the highest possible value for .the procedure adopted for the original modularity applies in this case as well until eq .( [ eq12 ] ) , which now takes the form whose derivative with respect to is which vanishes when .so , the ideal network partition for the new modularity is a split in two communities of the same size , independently of the number of nodes and links of the network .the resolution scale of is then of the order of the size ( in degree ) of the two communities , which is .because of that , the optimization of delivers partitions in a small number of modules , which means that the network is examined at a coarser level with respect to the original modularity and the situation is much worse than before .quality functions allow to convert the problem of community detection into an optimization problem .this has big advantages , potentially , because one can exploit a wide variety of techniques and methods developed for other optimization problems . in this paperwe used the modularity of newman and girvan as a paradigm to discuss the problem of the definition of a quality function suitable for community detection .we have seen that modularity can not scan the network below some scale , and that this may leave small modules undetected , even when they are easily identifiable .moreover , the identification of modules may be affected by the time evolution of the network , due to the fact that modularity s resolution scale varies with the size of the network .the main issue is how to build the quality function starting from the expression of the quality of a single community .we have seen that there are reliable ways to do it , when one wants to find the best partition in a fixed number of modules .modularity itself , for instance , is a possible prescription . 
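for completeness , the two derivative conditions invoked above , with the stripped symbols restored ; the first expression is eq . ( [ eq12 ] ) as given in the text , while the per - module average form for the modified measure is our reading of it :

```latex
% Q_max(m) = 1 - m/L - 1/m for the original modularity, and its per-module
% average Q_AM,max(m) = 1/m - 1/L - 1/m^2 for the modified measure:
\[
\frac{d}{dm}\Big(1-\frac{m}{L}-\frac{1}{m}\Big) = -\frac{1}{L}+\frac{1}{m^{2}} = 0
\;\Rightarrow\; m^{*}=\sqrt{L},
\qquad
\frac{d}{dm}\Big(\frac{1}{m}-\frac{1}{L}-\frac{1}{m^{2}}\Big) = -\frac{1}{m^{2}}+\frac{2}{m^{3}} = 0
\;\Rightarrow\; m^{*}=2 .
\]
% so the original measure favours about sqrt(L) modules of about sqrt(L) links each,
% while the averaged measure always favours a split into two communities.
```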
instead, the problem of discriminating whether a partition of a network in modules is better than a partition in modules , with is more difficult to control , and so far unsolved .as long as this problem remains open , using the optimization of quality functions to identify communities will be unjustified .
|
community structure represents the local organization of complex networks and the single most important feature to extract functional relationships between nodes . in the last years , the problem of community detection has been reformulated in terms of the optimization of a function , the newman - girvan modularity , that is supposed to express the quality of the partitions of a network into communities . starting from a recent critical survey on modularity optimization , pointing out the existence of a resolution limit that poses severe limits to its applicability , we discuss the general issue of the use of quality functions in community detection . our main conclusion is that quality functions are useful to compare partitions with the same number of modules , whereas the comparison of partitions with different numbers of modules is not straightforward and may lead to ambiguities .
|
astrophysical plasmas like stellar atmospheres , gaseous nebulae or accretion disks are not in any sense closed systems , as they emit photons into interstellar space .therefore , the thermodynamic state of such plasmas is in general not described well by the equilibrium relations of statistical mechanics and thermodynamics for local values of temperature and density , i.e. by local thermodynamic equilibrium ( lte ) .the presence of an intense radiation field , which in character is very different from the equilibrium planck distribution , results in deviations from lte ( non - lte ) because of strong interactions between photons and particles . the thermodynamic state is then determined by the principle of statistical equilibrium .all microscopic processes that produce transitions from one atomic state to another need to be considered in detail via the rate equations .a fundamental complication is that the distribution of the particles over all available energy states the level populations or occupation numbers in turn affect the radiation field via the effects of absorptivity and emissivity on the radiation transport .what is required is a self - consistent simultaneous solution of the radiative transfer and statistical equilibrium equations .a _ model atom _ is a collections of atomic input data required for the numerical solution of a given non - lte problem .it is a _ mathematical - physical approximation _ to the quantum - mechanical system of a real atom , and its interaction with radiation and with other particles in a plasma .a model atom comprises , on one hand , data to specify the structure of the atom / ion like energy levels , statistical weights and ionization potentials . on the other hand , the transitions among the individual states need to be described , requiring oscillator strengths , cross - sections for photoionization and collisional excitation / ionization , etc . the number of levels in a model atom amounts typically to several tens to several hundred in modern work , and the number of transitions from hundreds to many ( ten-)thousands . as only a very limited amount of atomic datahave been determined experimentally up to now mostly energy levels , wavelengths and oscillator strengths , most of the data have to be provided by theory .large collaborative efforts have been made to compute the data required in astrophysical applications via _ ab - initio _ methods .the opacity project ( op ; seaton ; seaton et al . ) and its successor the iron project ( ip ; hummer et al . ) provided enormous databases of transition probabilities and cross - sections for photoionization and excitation via electron impact .many smaller groups and individuals have contributed additional data , most notably kurucz ( see e.g. kurucz ) in a tremendous effort lasting already for about three decades . _ab - initio _data for radiative processes between levels of principal quantum number are available for most of the ions of the lighter elements up to calcium , and for iron . 
_ ab - initio _data for excitation via electron collisions are by far less complete , typically covering transitions up to or 4 for selected ions of the lighter elements and for iron .reliable data for other members of the iron group and for the heavier elements are only selectively available .the remainder of data still the bulk by number has to be approximated for practical applications .consequently , the starting point for the construction of model atoms for many elements of astrophysical interest has improved tremendously since the mid-1980s .nevertheless , building realistic model atoms is neither an easy nor a straightforward task .it is a common misconception that non - lte _ per se _ brings improvements over lte modelling . a careful lte analysis of well - selected lines _can _ be more reliable than a non - lte study of the ` wrong ' lines with an inadequate model atom .on the other hand , computations using a realistic model atom _ will _ improve over lte provided that the other ingredients of the modelling are also realistic .the independence of microscopic processes from environment at least under not too extreme conditions provides a tool to assess the quality of model atoms by comparison with observation .comprehensiveness and robustness of a model atom are given when it reproduces the observed line spectra over a wide range of plasma conditions .standard stars should serve as test ` laboratories ' , covering a wide range of effective temperature and surface gravity ( particle density ) . in the following we discuss practical aspects of the construction of model atoms for non - lte line - formation calculations of trace elements in stellar atmospheres .guidelines and suggestions are given how to build up robust and comprehensive model atoms , and how to test them thoroughly .non - lte line - formation calculations solve the coupled statistical equilibrium and radiative transfer equations for a prescribed model atmosphere , which may itself be in lte or non - lte .the computational expenses are therefore lower than for full non - lte calculations that solve also for the atmospheric structure .hence , comprehensive model atoms may be treated in great detail , which turns out to be a crucial advantage of restricted non - lte calculations .one of the most important quantities for the comparison with observed spectral lines which is the basis of quantitative spectroscopy are _ accurate occupation numbers _ for the levels involved in the transitions .these have to be determined in general by solution of the rate equations of statistical equilibrium ( though detailed equilibrium may turn out a valid approximation _ a posteriori _ ) where the and are the radiative and collisional rates , respectively , for the transitions from level to level .radiative upward rates are given by where is an atomic cross - section for bound - bound or bound - free processes , the mean intensity , planck s constant and the frequency . 
in the case of collisional processes the upward - rates are given by where is the electron density ( for the moment we assume that collisions with heavy particles can be neglected , see sect .[ collisions ] ) , the velocity and the ( in general maxwellian ) velocity distribution of the colliding particles .the corresponding downward rates are derived from detailed - balancing arguments , requiring correction for stimulated emission in the case of the radiative downward rates .inspection of eqns .[ rateeq][colrates ] shows which input quantities need to be reliably known in order to facilitate an accurate determination of the level populations : i ) the local temperatures and particle densities , _ and _ ii ) the non - local radiation field , _ and _ iii ) accurate cross - sections , _ and _ iv ) all transitions relevant for the problem have to be taken into account .any shortcomings in these will affect the final accuracy that can be obtained in the modelling .items i ) and ii ) depend on the prescribed model atmosphere used for the restricted non - lte calculations .this has to give a fair description of the _ real _ temperature gradient and the density stratification in the star s atmosphere under investigation .particular care has to be invested in the stellar parameter determination ( effective temperature , surface gravity ) .this has to account for a self - consistent treatment of quantities which are often thought to be of secondary nature ( microturbulence , metallicity , helium abundance , etc . ) , see nieva & przybilla ( this volume ) .good knowledge of the model atmosphere code is a prerequisite for successful non - lte line - formation computations for individual cases use of published model grids allows only to scratch the surface of the problem .items iii ) and iv ) are related to the model atom .nowadays , the question is often not _ whether _ to use _ ab - initio _data for the model atom construction , but _ which _ of the available datasets to adopt .this is a matter of experience and familiarity with atomic physics .the quality of agreement of _ ab - initio _ results with available experimental data should certainly guide the decision .a first important step is to check how well the _ ab - initio _ calculations reproduce the observed energies of the levels in an atom / ion .a comparison of observed and computed oscillator strengths and cross - sections for reactions gives further indications ( see for example reviews by williams ; kjeldsen ) .the latter will typically be possible only for ground and metastable states , but the opportunity for constraining the accuracy of the theoretical data should not be missed .another criterion is the agreement between length and velocity forms of oscillator strengths to verify the internal consistency of the _ ab - initio _ calculations .eventually , different model atoms may be constructed using the alternatives as input data to decide empirically which dataset should be preferred for the practical work .experience is also the key in deciding how extended a model atom should be and which transitions should be considered .nature realises all possibilities , but we have to handle the mathematical solution of a set of equations describing the physics packed into a ( restricted ) model . in particular , the numerical solution of the set of linear equations ( [ rateeq ] ) requires to be handled carefully .precautions need to be taken to keep the numerical problem well - conditioned and the algorithms stable . 
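as a toy illustration of the statistical equilibrium problem sketched above , and of why its conditioning matters in practice , the following closes the singular rate system with a particle conservation constraint ; all rates below are invented numbers , not atomic data :

```python
import numpy as np

# schematic statistical-equilibrium solve for a toy 3-level atom.
# P[i, j] = total rate (radiative + collisional) for the transition i -> j;
# the numbers are arbitrary illustrative values.
P = np.array([[0.0,    2.0e-2, 1.0e-3],
              [5.0e-1, 0.0,    4.0e-3],
              [1.0e-1, 3.0e-1, 0.0]])

n_levels = P.shape[0]
A = np.zeros((n_levels, n_levels))
for i in range(n_levels):
    A[i, i] = -P[i, :].sum()        # losses out of level i
    for j in range(n_levels):
        if j != i:
            A[i, j] = P[j, i]       # gains into level i from level j

# the homogeneous system A n = 0 is singular; one equation is replaced by the
# particle-conservation constraint sum_i n_i = 1, which closes the system and
# is one of the places where conditioning has to be watched.
A[-1, :] = 1.0
b = np.zeros(n_levels)
b[-1] = 1.0

n = np.linalg.solve(A, b)
print(n)                            # fractional level populations
```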
in practicethis means that some transitions may better be ignored in a model atom , or the number of levels may be restricted in order to obtain meaningful results .larger model atoms accounting for more transitions are therefore not necessarily ` better ' , even if the individual atomic input data are of high quality .finally , execution times of the model computations are also important for the practical work .they should not be excessive , requiring a certain compactness of model atoms even for non - lte line - formation computations on prescribed model atmospheres .hence , the construction of model atoms is effectively a highly complex optimisation problem .a compromise needs to be found between comprehensiveness , accuracy , stability and efficiency .the first step in the construction of a model atom is to decide on the extension of the model : which ions , which energy levels should be included , and how ?this depends on the specific non - lte problem and may range from a few levels for studies of a resonance line to many hundred including packed ` superlevels ' if a reproduction of the complex spectra of e.g. iron group species is aspired . in the following we concentrate on the general strategy for the construction of comprehensive model atoms , which are able to reproduce practically the entire observed spectra of an ion over a wide parameter range . in a model of the bright supergiant ( b8ia ) as a function of for hi model atoms of different complexity : for 10 , 15 , 20 , 25 , 30 , 40 , 50 levels ( dotted , dashed - dotted , dash - dot - dot - dotted , long dashed , full , dashed , full thick lines ) .the lower right panel displays the behaviour of all levels in the 50-level model atom .note the rydberg states asymptotically approaching the hii limit .see the text for details . from przybilla & butler ( ) . ]the fundamental parameter that determines which ions should be included in a model atom is the effective temperature , as this determines the energetics of the microscopic processes .the term structure of the ions provides a second criterion .hence , the main ionization stage should be adequately represented , plus usually two or three minor ionic species which may comprise in fact the ones of particular interest .e.g. , the main ionization stage of carbon in a b0v star ( with ) is ciii in the line - formation region , but features of the minor species cii and civ are also present in the optical spectra .consequently , a comprehensive model atom should consider cii - iv , plus the ground level of cv .the latter has an enormous energy gap between the ground and the first excited level , of about 300ev , such that from atomic structure considerations excited cv levels can be safely ignored in the model atom .a lithium model atom for use with solar - type stars can be kept simple detailed lii + the liii ground level for the same reason , despite liii dominating by far in terms of population . concerning the choice of levels to include in a comprehensive model atom there are two objective criteria available , which are tightly coupled .one is the energy gap between the energetically highest level of the ion and the continuum , as defined by the ground level of the next higher ionization stage . as a general rulethe gap should be less than to ensure an accurate determination of ionization fractions . 
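a quick way to see why such a gap criterion matters , and why the maxwell - tail argument used for the mgii example below is so temperature sensitive , is to evaluate the fraction of a maxwellian electron gas above a given threshold ; the energy ratios below are illustrative :

```python
import math

def maxwell_tail_fraction(x):
    """fraction of a maxwellian with kinetic energy above x = E_threshold / kT."""
    return math.erfc(math.sqrt(x)) + 2.0 * math.sqrt(x / math.pi) * math.exp(-x)

for x in (1, 2, 3, 5, 8, 12):
    print("E/kT = %2d:  %.3e" % (x, maxwell_tail_fraction(x)))
# the fraction of electrons able to overcome the threshold falls roughly
# exponentially with the gap in units of kT, which is why levels left with a
# large gap to the continuum lead to poorly constrained ionization fractions.
```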
in order to keep a model atom robustit is recommended to include energetically higher levels than the minimum necessary to cope with a given problem .non - lte studies of the mgii line may serve as an example : the line is observable from f - type to mid o - type stars . for explaining the behaviour of the line in early - type stars ( mihalas ) it may be sufficient to consider levels up to =5 or 6 using the criterion above , as the majority of the colliding particles will be able to facilitate ionization ( see fig .[ maxwell ] ) . however , the same model atom will not be useful for analyses of a - type stars , as only a small fraction of electrons in the high - velocity tail of the maxwell distribution will be able to overcome the reaction threshold . in that case ,completeness up to =8 would be required , etc .the second criterion is the convergence of the behaviour of level departure coefficients with increasing model complexity ( for a given set of transition data ) .an example is shown in fig .[ departureconvergence ] , for hydrogen model atoms considering levels up to =10 to 50 .models with a too low a number of levels can show a different behaviour at line - formation depths than the more complex models , resulting in inaccurate predictions . an alternative formulation of this criterion is via the convergence of the line source function ( e.g. sigut ) . for many elements fine - structure states of a term may safely be combined into one level representing the term .this comprises , in particular , the cases that are approximated well by -coupling .collisions couple the individual sub - levels tightly because of their small energy separations , i.e. they are in lte relative to each other .a similar opportunity for simplification of the model atom opens up for levels at higher excitation energies .the energy separations of terms decrease with increasing angular quantum number for the same , and in general with increasing .eventually , they may be safely grouped into one level with appropriate statistical weight .this helps to keep the number of explicit non - lte levels to be treated for elements up to about calcium below , even if several ionization stages are treated simultaneously .an example of a comprehensive model atom structure is visualised in fig . [ grotrian ] in form of a grotrian diagram , for mgi . for the heavier elements with complex electron structure like the iron group elements it may be worthwhile to consider regrouping a multitude of levels with similar properties into ` superlevels ' ( and the transitions into ` superlines ' ) , a concept first introduced by anderson ( ) .the _ non - local _ character of the radiation field drives the stellar atmospheric plasma _ out of detailed equilibrium _ : photons can travel large distances before interacting with the particles , coupling the thermodynamic state of the plasma at different depths in the atmosphere .this affects the excitation and ionization of the material .radiative transitions obey _ selection rules_. changes in the internal energetic state of an atom / ion can occur by the absorption / emission of photons , giving rise to spectral lines .the strength of a spectral line is basically determined by the number of absorbers ( or emitters ) and the line absorption cross - section , which is given by where is the electron charge , the electron mass and the oscillator strength . 
is the line absorption profile ( the emission line profile is identical assuming complete redistribution ) , which can be approximated well by a doppler profile in most cases when the coupled radiative transfer and statistical equations are solved to determine the level populations in the restricted non - lte approach .the accuracy of the oscillation strengths used for the model atom construction will be a limiting factor for analyses . fortunately , high - accuracy data is available in many cases[other ] that are required in lte investigations .] , obtained both from experiments as well as from _ ab - initio _ calculations .it is worthwhile to cross - check available data sources .newer data are not necessarily better , even if a apparently more advanced method was used for their determination . as usual, the devil is in the details .an example is shown in fig .[ rbbplot ] .oscillator strengths for cii from the op ( -matrix method in the close - coupling approximation assuming -coupling ) are compared to data of nahar ( ) obtained with the ( in principle superior ) breit - pauli -matrix ( bprm ) method and the smaller number of high - precision data from application of the multiconfiguration hartree - fock ( mchf ) method .the in general good agreement between the op and the mchf data ( with few exceptions they follow the 1:1 relation ) and the considerable scatter in the comparison of op and nahar s -values was one of the factors to disregard this particular set of bprm data from the model atom construction in that case. however , the question which of the available data should be used for the construction of model atoms has no simple answer in general .limitations on the number of transitions that can be considered explicitly in the radiative transfer calculations may be imposed by the numerical method chosen for solving the equation systems .complete linearisation techniques are much more restricted than accelerated lambda iteration ( ali ) techniques .the most important transitions ( high -value , location at wavelengths with non - negligible flux ) will need to be considered in the former case while virtually all line transitions may be accounted for in the latter case , see fig .[ grotrian ] for an example . and 4 as a function of wavelength . from np08 . 
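a minimal sketch of evaluating the bound - bound cross - section above with a pure doppler profile ; the cgs constants are the standard values , and the line parameters ( wavelength , f - value , temperature , atomic mass ) are invented for illustration :

```python
import numpy as np

# schematic line opacity with a pure doppler profile, cgs units.
C_CLASSICAL = 0.02654          # pi e^2 / (m_e c)  [cm^2 Hz]
C_LIGHT = 2.99792458e10        # cm s^-1
K_BOLTZ = 1.380649e-16         # erg K^-1
M_H = 1.6726e-24               # g

lam0 = 6578.05e-8              # line wavelength [cm] (illustrative)
f_lu = 0.1                     # oscillator strength (illustrative)
T = 2.0e4                      # temperature [K]
m_atom = 12.0 * M_H            # a carbon-like atom

nu0 = C_LIGHT / lam0
v_th = np.sqrt(2.0 * K_BOLTZ * T / m_atom)      # thermal speed
dnu_D = nu0 * v_th / C_LIGHT                    # doppler width [Hz]

nu = nu0 + np.linspace(-5, 5, 201) * dnu_D
phi = np.exp(-((nu - nu0) / dnu_D) ** 2) / (np.sqrt(np.pi) * dnu_D)  # normalised profile
sigma = C_CLASSICAL * f_lu * phi                                     # cm^2 per absorber

print(sigma.max())             # line-centre cross-section
```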
]the availability of photoionization cross - sections has improved enormously in the past 20 years , mostly because of the efforts made in the op and ip .like for oscillator strengths , the quality of the different data sets should be evaluated for the model atom construction .a comparison of photoionization cross - sections from two different calculations is shown in fig .[ photocross ] , for two excited levels in cii .maybe surprisingly , some details can be relevant for the model atom construction , while obvious discrepancies may be not .the small shifts in the resonance structure near the ionization threshold of the energetically relatively low - lying 2 state of cii ( see fig .[ testing ] for a grotrian diagram ) can be important for the rate determination , as they occur near the flux maximum of ob - type stars .the more than two orders - of - magnitude differences in the high - energy tail of the cross - section for the 4 level ( occurring because of the different targets in the two _ ab - initio _ calculations ) are irrelevant on the other hand , because of the low flux at these short wavelengths .it has to be decided by comparison of the results from different model atom realisations which of the available data should finally be selected , see sect .[ tests ] for an example .as always , some guidance may also be obtained by comparison of the different theories with experiments . the op and ip have provided _ ab - initio _data for photoionization from levels with typically and .missing data for levels of higher or may be rather safely approximated by hydrogenic cross - sections ( mihalas , p.99 ) for the model atom construction , in particular as these levels are usually packed , see sect . [ structure ] . here, denotes the charge of the ion and the bound - free gaunt factor , which is of order unity at ionization threshold .the threshold cross - section is given by cm .inelastic collisions with particles can also lead to excitation and ionization of atoms / ions .the velocity distribution of particles in stellar atmospheres is in practically all cases maxwellian , determined by the _ local _ plasma temperature .collision processes will therefore drive the plasma _ towards lte_. contrary to radiative transitions , no selection rules apply .typically , only electron collisions are considered , as the thermal velocity and hence the collision frequency of heavy particles is much smaller , e.g. by a factor for hydrogen .this is in particular valid for all environments where the stellar plasma is sufficiently ionized .heavy particle collisions may become important in special cases , however , like for cool metal - poor stars .the main aspect of collisional excitation by electron impact is certainly the thermalising effect on level populations in general , dampening out departures from detailed equilibrium imposed by the radiative processes .curiously enough , collisional excitation can also drive individual level occupations _ out of lte _ under special circumstances .this occurs for cases where a level is collisionally tightly - coupled to another level that experiences strong non - lte departures .examples are energetically close levels from different spin systems , one being metastable ( a radiative decay to the ground state is prohibited by selection rules ) .a prominent example for this coupling are the 3 and 3 levels of oi ( przybilla et al . ; fabbian et al . ) .larger sets of collisional excitation data for transitions up to typically =3 or 4 are available now , e.g. 
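a sketch of the hydrogenic ( kramers - type ) approximation mentioned above for levels lacking _ ab - initio _ data ; the gaunt factor is set to unity and the numerical constants are the commonly quoted textbook values , taken here as assumptions :

```python
import numpy as np

# hydrogenic photoionization cross-section, as commonly used to approximate
# levels without ab-initio data; g_bf = 1 here, cgs units.
NU_RYD = 3.288e15              # rydberg frequency [Hz]

def sigma_hydrogenic(nu, n, Z=1, g_bf=1.0):
    nu = np.asarray(nu, dtype=float)
    nu_n = NU_RYD * Z**2 / n**2                      # threshold frequency of level n
    sig = 2.815e29 * Z**4 * g_bf / (n**5 * nu**3)    # cm^2, nu^-3 kramers decline
    return np.where(nu >= nu_n, sig, 0.0)

# threshold cross-section of hydrogen n = 2: of order 1e-17 cm^2
print(sigma_hydrogenic(NU_RYD / 4.0, n=2))
```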
from the ip .a good part of the data are published in the astrophysics literature , but the reader should be aware that much more data can be found in the physics literature because of their relevance for fusion research and technical applications .most useful for practical applications are tabulations of thermally - averaged effective collision strengths where is the energy of the outgoing electron and the collision strength , which is symmetric as well as dimensionless , .effective collision strengths facilitate an easy evaluation of transition rates , which are proportional to . while _ ab - initio _ data are of highest relevance for the construction of model atoms , one has to resort to approximation formulae for the vast majority of possible transitions .different descriptions are available from the literature , a comparison can be found in mashonkina ( ) . in the following we concentrate onthe two most commonly used approximations . for optically allowed and forbidden transitions in cii in the near - threshold region as a function of incident electron energy ( fine - structure data from wilson et al . ) . for comparison , is indicated by a dotted line . ]the semiempirical formula of van regemorter ( ) allows rates for radiatively permitted transitions to be evaluated in terms of the oscillator strength , \exp(-u_0 ) \gamma(u_0)\ , , \label{vanregemorter}\ ] ] where is the ionization potential of hydrogen scaled by , ( is the threshold energy for the process ) , and $ ] for ions ( is the first exponential integral ) .the parameter is about 0.2 if the principal quantum number changes during the transition and about 0.7 if not . for neutral atoms has a different form ( see auer & mihalas ) .excitation rates of radiatively forbidden transitions are often evaluated according to the formula of allen ( ) , where is a temperature - dependent factor in the collision rate as defined by mihalas ( , p. 133 ) .typically , is assumed for forbidden transitions .note that eqns .[ vanregemorter ] and [ allen ] give , at best , order - of - magnitude estimates .comparisons of _ ab - initio _ data with the approximations should be made whenever possible to evaluate the true uncertainties that can be expected .an example is shown in fig .[ omega ] .differences of up to several orders of magnitude are found , and the quantum - mechanical data can show pronounced resonance structure .it is therefore advisable to investigate the available _ ab - initio _data for trends and regularities .extrapolations can be made based on these in order to improve the approximate input data for the model atom construction .the rates of collisional ionization by electron impact are mostly affected by the availability of electrons with energies high enough to overcome the threshold for the reaction .this makes collisional ionization from the ground state and energetically low - lying levels rather inefficient , as only few electrons in the high - velocity tail of the maxwell distribution are available for this . on the other hand ,collisions become a dominant factor for the coupling of high - lying levels to the continuum .unfortunately , cross - sections for ionization by electron impact are among the least - constrained atomic data .experiments usually cover ionization from the ground state only , and the reader is referred to the atomic physics literature for extracting data for a specific problem . 
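staying for a moment with collisional excitation before returning to ionization , a short sketch of how a tabulated effective collision strength is converted into excitation and de - excitation rates ; the prefactor is the standard value , taken here as given , and the transition data are invented :

```python
import numpy as np

K_B_EV = 8.617333e-5           # boltzmann constant [eV K^-1]

def excitation_rate(T, upsilon, g_lower, delta_e_ev):
    """electron-impact excitation rate coefficient [cm^3 s^-1] from a
    thermally averaged effective collision strength."""
    return 8.629e-6 / (g_lower * np.sqrt(T)) * upsilon * np.exp(-delta_e_ev / (K_B_EV * T))

def deexcitation_rate(T, upsilon, g_upper):
    return 8.629e-6 / (g_upper * np.sqrt(T)) * upsilon

T = 2.0e4                      # kinetic temperature [K]
n_e = 1.0e13                   # electron density [cm^-3]
ups, g_l, g_u, dE = 1.5, 2.0, 4.0, 5.3   # made-up transition data

C_lu = n_e * excitation_rate(T, ups, g_l, dE)     # upward collision rate [s^-1]
C_ul = n_e * deexcitation_rate(T, ups, g_u)       # downward rate [s^-1]
print(C_lu, C_ul)
# detailed balance check: C_lu / C_ul should equal (g_u/g_l) exp(-dE/kT)
print(C_lu / C_ul, (g_u / g_l) * np.exp(-dE / (K_B_EV * T)))
```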
on the theoretical side, few methods have been successfully applied .the fundamental challenge which distinguishes collisional ionization from excitation is the fact that the coulomb interaction between each of the two outgoing electrons and the residual ion is present even at large distances .recently , breakthrough results have been obtained by use of the converging close - coupling method .several fundamental processes have been modelled accurately , providing cross sections that closely reproduce the available experimental data in these cases , see bray et al .( ) for a review .however , in the majority of cases one has to rely on more simple approaches .an often used approximation formula for quantifying collisional ionization was given by seaton ( ) , which expresses the collisional cross - section in terms of the photoionization cross - section , yielding a rate where is the threshold photoionization cross - section ( sect .[ photoionizations ] ) , and is of the order 0.1 , 0.2 , and 0.3 for , 2 , and , respectively . again , eqn .[ seaton ] provides , at best , an order - of - magnitude estimate .a comparison of collision rates calculated with eqn .[ seaton ] with those evaluated by an empirical analytical expression ( drawin ) indicates that the uncertainties may be sometimes much larger ( mashonkina ) . the seaton formula ( and analogous simple approximations )should therefore be applied with caution .excitation and ionization by inelastic heavy particle collisions are usually considered unimportant in comparison to electron collisions , which occur more frequently .however , the ratio of hydrogen atoms to electrons may easily exceed a factor 10 in cool stars , in particular in metal - poor objects .in such a case h collisions may become a dominant thermalisation process , which must not be neglected .characteristic collision energies in cool star photospheres are of the order , i.e. they are comparable to typical atomic transition energies .consequently , the near - threshold behaviour of the cross - sections is most important for the determination of the collision rates .laboratory measurements or _ ab - initio _ calculations of cross - sections near threshold are scarcely found in literature for these neutral particle collisions .some progress has been made recently for the na + h system ( fleck et al . ; belyaev et a. , ) , showing that the collision rates can differ by several orders of magnitude compared to simple approximations ( see the discussion below ) .similar discrepancies were also found for the li + h system ( belyaev & barklem ) . in face of the absence of reliable data for practically all cases of interest , one has to resort to the use of approximations for the description of inelastic hydrogen collisions .most work relies on the steenbock & holweger ( ) approximation , who generalised drawin s formulae ( drawin ; based on thomson s classical theory ) , originally developed for collisions between identical particles .the maxwell - averaged formulae for excitation and ionization of particle species by collision with h becomes with denoting the reduced mass .for collisional excitation and and for collisional ionization and , where denotes the energy difference between upper and lower state of the transition and and are the ionization energy of hydrogen and atomic species ; is the oscillator strength , is an efficient oscillator strength for ionization and the number of equivalent electrons . 
the function is given by .note that eqn .[ sh ] does not apply to optically forbidden transitions .takeda ( ) suggested to relate the hydrogen collision rate with the electron collision rate via assuming _ ad - hoc _ a similarity of cross - sections for both cases ( is usually evaluated with ) .here , denotes the number density of neutral hydrogen. often the results are scaled by a factor .a value of is equivalent to no hydrogen collisions , enforces lte . in general, is constrained empirically by demanding that the scatter in abundance as determined from the entire sample of lines of a species should be minimised .equations [ sh ] and [ tak ] were considered appropriate to provide an order - of - magnitude estimate for a long time , but in view of the few _ ab - initio _ calculations available now , this assessment appears too optimistic .this confirms earlier indications of an underestimation of the real uncertainties , from the comparison with other approximations , see e.g. mashonkina ( ) .barklem ( ) investigated the uncertainties for the most simple case , h + h , in detail .the steenbock & holweger formula remains in use for astrophysical applications in view of the lack of other reliable data , despite all the warning evidence .in view of this , efforts should be made to determine proper scaling factors in order to minimise the impact on the accuracy of analyses .this could be made by extensive investigations following the recommendations for testing model atoms in sect .[ tests ] , covering the parameter space comprehensively ( wide range of effective temperature , densities and metallicity ) .consideration of the processes described in the previous two sections is usually sufficient for the construction of model atoms for the analysis of stellar photospheres .nonetheless , two other types of radiative and collisional processes may be of relevance in some cases .they are only briefly described in the following for completeness , leaving it to the reader to investigate the specialist literature ._ autoionization _ can occur in complex atoms with several electrons .when two electrons are excited simultaneously , this can give rise to states below and above the ionization potential .the states above the ionization threshold may autoionize to the ground state of the ion , releasing one electron .the inverse process is also possible , and , if a stabilising radiative decay occurs within the ( short ) lifetime of the doubly excited state and [ omega ] . ], it can give rise to an efficient recombination mechanism .this is referred to as _dielectronic recombination_. details of rate coefficient modelling can be found e.g. in badnell et al .( ) .an application in the context of wr - type central stars of planetary nebulae is discussed by de marco et al .( ) ._ charge exchange _ reactions are collisional processes between atoms / ions in which one , or more , electrons are transferred from one collision partner to the other , e.g. a , with b usually being h or he .one well - known reaction occurring in stellar atmospheres is o , which can dominate the ionization balance of oxygen as the non - lte departures of the (hi)/(hii ) are forced upon (oi)/(oii ) by this resonant reaction ( the ionization potentials of neutral hydrogen and oxygen are very similar ) , see e.g. baschek et al .( ) or przybilla et al .( ) for a discussion . a list of reaction playing a possible role in astrophysical plasmas and tabulated reaction ratesare provided e.g. 
by arnaud & rothenflug ( ) .the question whether a model atom is realistic can only be answered by comparison with observation .one needs to test whether the model atom is comprehensive enough , i.e. whether the level structure and all relevant connecting transitions are considered properly and whether the atomic data used are sufficiently accurate .usually , this should give rise to an iterative process : a stepwise improvement of the model atom by empirical selection of the ` best ' input data . the aim is to single out _one _ model atom , that reproduces the observed spectra closely _ at once _, independent of the plasma conditions ( atomic properties are independent of environment ) . in order to give an idea on the practical approach for performing such tests we discuss an example .synthetic line profiles from calculations with two different model atoms are compared in the left panel of fig .[ photoeffects ] .while the strong cii transition is highly sensitive to non - lte effects in particular to the photoionization cross - sections adopted , the other line is virtually insensitive to any model atom realisation using reasonable atomic input data ( it is ` in lte ' ) .such a sensitivity is one of the keys to select the ` right ' photoionization data for the model atom construction .the second ingredient in this process is the comparison with observations , here for stars in a temperature sequence ( right panel of fig .[ photoeffects ] ) .this is in order to test the reaction of the model realisations to a hardening radiation field .the line ` in lte ' serves as reference and the goal of the model atom optimisation is to minimise the differences in the derived abundances from the various indicators . in this caseit was shown that ill - chosen atomic data can result in line abundance differing by up to 0.8dex ( np08 ) , which helped to solve one notorious non - lte problem of stellar astrophysics ( nieva & przybilla ) .ideally , the observations used for the model atom calibration should range from the far - uv to the near - ir in order to test as many channels ( spectral lines ) as possible , even those that may be irrelevant for practical applications later . at the same time, the observations should cover an as wide range of plasma parameters as possible : high - density , i.e. collisionally dominated , environments like the photospheres of dwarf stars and low - density plasmas ( those dominated by radiative processes ) as encountered in ( super-)giants should be considered , at different temperatures . where possible , also the metallicity dependence of non - lte effects should be investigated to test the response of a model atom to a varying radiation field , e.g. by considering stars of population i and ii .further tests may include more ` exotic ' environments , like he - dominated plasmas in compact subdwarfs ( przybilla et al . ) or giant extreme helium stars ( przybilla et al . , ) .such a comprehensive approach involving satellite observations may not be feasible in almost all cases . 
however , high - quality spectroscopy from the ground with modern echelle spectrographs is often sufficient to provide the means to facilitate proper model atom testing .figure [ testing ] visualises the channels available for testing a model atom of cii using the optical spectra of early b - type stars .note that despite this comprises a fair number of energy levels and transitions from several multiplets , only a small fraction of the entire model atom can be really scrutinised by this .consideration of lines from additional ionization stages ( ciii and civ in that case , np08 ) may put further constraints , as the full set of observed lines should be reproduced closely by the model simultaneously .the above example is typical for elements with relatively simple electron structure .more complex electron configurations like the open 3-shells in the iron - group members pose a larger challenge at first glance , but the enormous number of observable transitions puts many constraints on the model atom construction as well . the real challenge are therefore the low - abundance light elements lithium , beryllium and boron , and in particular their alkali - like ions . there, typically only the resonance lines are observable , which gives only marginal constraints for tests of the model atom .where possible , resonance lines from another ionization stage should therefore be investigated , or subordinate lines that may become observable in stars with particular high abundance in the element under study .it is of utmost importance to use realistic atmospheric structures for testing model atoms , requiring a proper determination of the stellar atmospheric parameters , see nieva & przybilla ( this volume ) .well - studied standard stars like the sun ( g2v ) , procyon ( f5iv - v ) , vega ( a0v ) , ( b0.2v ) or arcturus ( k1.5iii ) with tight constraints on their atmospheric parameters are therefore primary objects for the comparison of the models with observation . however, further stars that bracket the extremes of the parameter space to be studied have also to be considered .the plethora of possibilities implies that the model atom construction does not result in a unique solution .a good reproduction of observations may be achieved by a whole family of models .the main insight is that there are many insufficient model atoms , but few adequate ones . in consequence ,non - lte analyses are not superior to lte investigations _ per se _ , but require robust and comprehensive model atoms . in view of the increasing availability of accurate and precise atomic data from _ab - initio _ calculations and experiments one is faced by a perpetual challenge : the impact of new high - quality atomic data should be tested on the modelling whenever such become available . of course , the same is true for new observations that may facilitate the predictive power of the models to be tested further by opening up other channels for the model atom calibration .99 allen , c. 1973 , _ `` astrophysical quantities '' _ , 3rd edn .( athlone press : london ) anderson , l. s. 1989 , apj , 339 , 558 arnaud , m. & rothenflug , r. 1985 , a&as , 60 , 425 auer , l. & mihalas , d. 1973 , apj , 184 , 151 badnell , n. r. , omullane , m. g. , summers , h. p. , et al . 2003 , a&a , 406 , 1151 barklem , p. s. 2007 , a&a , 466 , 327 baschek , b. , scholz , m. & sedlmayr , e. 1977 , a&a , 55 , 375 belyaev , a. k. , grosser , j. , hahne , j. & menzel , t. 1999 , phys .a , 60 , 2151 belyaev , a. k. & barklem , p. s. 
2003 , phys .a , 68 , 062703 belyaev , a. k. , barklem , p. s. , dickinson , a. s. & gada , f. x. 2010 , phys .rev . a , 81 , 032706 bray , i. , fursa , d. v. , kheifets , a. s. & stelbovics , a. t. 2002 , j. phys .b , 35 , r117 de marco , o. , storey , p. j. & barlow , m. j. 1998 , mnras , 297 , 999 drawin , h. w. 1961 , z. physik , 164 , 513 drawin , h. w. 1968 , z. physik , 211 , 404 drawin , h. w. 1969 , z. physik , 225 , 483 fabbian , d. , asplund , m. , barklem , p. s. , et al . 2009 , a&a , 500 , 1221 fleck , i. , grosser , j. , schnecke , a. , et al .1991 , j. phys .b , 24 , 4017 froese fischer , c. & tachiev , g. 2004 , at .data nucl .data tables , 87 , 1 hummer d. g. , berrington k. a. , eissner w. , et al .1993 , a&a , 279 , 298 kjeldsen , h. 2006 , j. phys .b , 39 , r325 kurucz , r. l. 2006 , eas pub . ser . , 18 , 129 mashonkina , l. j. 1996 , in _`` model atmospheres and spectrum synthesis '' _ , ed .s. j. adelman , f. kupka & w. w. weiss ( asp : san francisco ) , p. 140 mihalas , d. 1972 , apj , 177 , 115 mihalas , d. 1978 , _`` stellar atmospheres '' _ , 2nd ed .freeman & co : san francisco ) nahar , s. & pradhan , a. 1997 , apjs , 111 , 339 nahar , s. 2002 , at .data nucl .data tables , 80 , 205 nieva , m. f. & przybilla , n. 2006 , apj , 639 , l39 nieva , m. f. & przybilla , n. 2008 , a&a , 481 , 199 ( np08 ) przybilla , n. & butler , k. 2004 , apj , 609 , 1181 przybilla , n. , butler , k. , becker , s. r. , et al .2000 , a&a , 359 , 1085 przybilla , n. , butler , k. , becker , s. r. & kudritzki , r. p. 2001, a&a , 369 , 1009 przybilla , n. , butler , k. , heber , u. & jeffery , c. s. 2005 , a&a , 443 , l25 przybilla , n. , nieva , m. f. & edelmann , h. 2006a , balt .astron . , 15 , 107 przybilla , n. , nieva , m. f. , heber , u. & jeffery , c. s. 2006b , balt .15 , 163 seaton , m. j. 1962 , in _ `` atomic and molecular processes '' _ ( academic press : new york ) seaton , m. j. 1987 , j. phys .b , 20 , 6363 seaton , m. j. , yu , y. , mihalas , d. & pradhan a. k. 1994 , mnras , 266 , 805 sigut , t. a. a. 1996 , apj , 473 , 452 steenbock , w. & holweger , h. 1984 , a&a , 130 , 319 takeda , y. 1991 , a&a , 242 , 455 van regemorter , h. 1962 , apj , 136 , 906 williams , i. d. 1999 , rep .phys . , 62 , 1431 wilson , n. j. , bell , k. l. & hudson , c. e. 2005 , a&a , 432 , 731 yan , y. & seaton , m. j. 1987 , j. phys .b , 20 , 6409 yan , y. , taylor , k. t. & seaton , m. j. 1987 , j. phys .b , 20 , 6399
|
model atoms are an integral part of the solution of non - lte problems . they comprise the atomic input data that are used to specify the statistical equilibrium equations and the opacities and emissivities of radiative transfer . a realistic implementation of the structure and the processes governing the quantum - mechanical system of an atom is decisive for the successful modelling of observed spectra . we provide guidelines and suggestions for the construction of robust and comprehensive model atoms as required in non - lte line - formation computations for stellar atmospheres . emphasis is given to the use of standard stars for testing model atoms under a wide range of plasma conditions .
|
the recent environmental health research has revealed that the associations between air pollution exposures and health outcomes vary spatially because the local environmental factors including topography , climate , and air pollutant constituents are heterogeneous across a broad area ( ; ; ). however , the current available region definition ( e.g. the 48 states of the continental u.s . ) can not ensure homogeneous intra - region exposure - health - outcome association ( eha ) .therefore , it is desirable to identify a set of disjoint regions exhibiting similar within - region ehas and distinct between - region ehas in a data - driven fashion ( automatic rather than predefined region map ) .the statistical inferences based on the automatic detected regions ( e.g. eha regression analysis ) can provide important guidance for environmental health research .motivated by analyzing a data set from the national data base of annual air pollution exposures and cardiovascular disease mortality rates for 3100 counties in the u.s . during the 2000s ( more details are provided in section 4 ) , we aim to estimate the spatially varying associations between air pollution exposures and health outcomes across different regions .the region level inferences may reveal local environmental changes and latent confounders that could influence local population health .however , most current spatial statistical models ( for areal data analysis ) are limited for this purpose because few of them allow data - driven allocation of counties into contiguous regions ( that different regions demonstrate distinct health risks ) . in our analysis , we use county as the basic spatial unit because the health outcome data is aggregated at the county level , yet the proposed approach could be applied for analysis with any basic spatial unit ( e.g. zip code ) .the most popular modeling strategy for areal spatial data has been through the conditionally autoregressive ( car ) distribution and its variants.( ; ; ; ; ; ) . in disease mapping, a random effect model is often employed to link the disease rate with exposures , and the random residuals and random slopes are used to account for spatial dependence via car or multivariate car ( mcar ) priors ( ; ; ; ) .although the random residuals and slopes could improve goodness - of - fit by explaining more proportion of variance , the inferences on the regression coefficients ( main effects ) are still limited to one value for the whole nation( ) .the spatial dependency information is incorporated into statistical modeling by using a spatial adjacency matrix where if and are adjacent neighbors and otherwise .it has been recognized that the spatial adjacency may not necessarily lead to similar random effects because health outcomes related factors such as landscape types ( e.g. urban area and forest ) or sociodemographic factors ( e.g. income levels ) could be distinct between neighbors . propose a bayesian hierarchical modeling framework to detect county boundaries by using boundary likelihood value ( blv ) based on the posterior distributions , yet the detected boundary segments are often disconnected ( ) . norm car prior has also been applied to account for abrupt changes between neighbors ( ; ) . 
further include the random effect for the boundary edges with ising priors and identify more connected boundary segments .although the boundaries can provide information of adjacent but distinct neighbors , they may not allow to define distinct regions due to the discontinuity of boundary edges .it is also worth to note that the boundaries are often defined based on the outcomes ( or residuals ) rather than the disease - exposure associations ( regression coefficients ) . to fill this gap , we present an automatic region - wise spatially varying coefficient method to recognize and estimate the spatially varying associations between air pollution exposures and health outcomes in automatically detected regions for environmental health data , and we name it as region - wise automatic regression ( rar ) model . rather than focusing on modeling the spatial random effects, the rar model aims to parcellate the whole spatial space into a set of disjoint regions with distinct associations , and then to estimate the association for each disjoint region .we implement the rar model in three steps : first , we assess the initial difference of associations by examining each spatial unit s impact on the overall regression coefficient based on ; second , based on the initial difference of the associations , we cluster the all spatial units into a set of spatially contiguous regions by using image segmentation technique ; last , we estimate the associations for each region , and we account for the within region spatial correlation of the residuals .we also develop a likelihood based optimization strategy for parameter selection .our main contribution is first to provide a statistical model to identify the data - driven definition of regions exhibiting differential regression coefficients that may yield significant public health impact .the rest of this paper is organized as follows . in section 2, we describe rar model and its three step estimation procedure . in section 3 ,we conduct simulation studies with the known truth to examine the performance of the rar model and to compare it with existing methods . in section 4 , we apply the proposed method to a environmental health dataset on investigating the association between pm 2.5 and cardiovascular disease mortality rate in the u.s .. we conclude the paper with discussions in section 5 .we use a graph model to denote the spatial data , where the vertex set represents all spatial units ( e.g. counties ) in the space / map and the edge set delineates the similarity between the vertices . for rar model , reflects the similarity between the eha of spatial units and ( and ) . in the following subsections ,we introduce the three steps of rar model : 1 ) to assess the spatial association affinity between spatial units ( ) ; 2 ) to parcellate graph into regions that and , and the spatial units within one region exhibit coherent coefficients ; and 3 ) to estimate the region - specific eha . for notational simplicity, we only consider cross - sectional study modeling though it is straightforward to extend the rar method to longitudinal studies . 
to assess the association between health outcomes and covariates ,a poisson regression model is often used : where is death count and is age - adjusted expected population count for county , and are covariates of potential confounders besides the air pollution exposure .if the association between air pollution exposure and health outcome at the location deviates from the general trend , then the regression coefficient excluding location will be distinct from the general regression coefficient of all observations .therefore , we adopt dfbetas to measure the eha deviation for location .we denote as the deviation of spatial unit : where is the design matrix including both pm2.5 and , is weight matrix ( is the identity matrix if no weight is assigned ) , is the leverage , is the standardized pearson residual , is the estimated standard deviation of ( ) .note that [ eq : dfbeta ] is a one - step approximation to the difference for a generalized linear model ( ) .then , the similarity of the associations between two spatially adjacent units and are calculated by a distance metric ( e.g. gaussian similarity function ) : where is the standard deviation across all , and is the indicator function which equals 1 only when and are spatially adjacent .in addition , if the natural cubic splines are used to fit the nonlinear trend of the air pollution exposure ( ) , then .thus , the range of is from 0 to 1 .the similarity matrix is a matrix , which is equal to ( and is hadamard product ) .thus , matrix fuses information of pm2.5 exposure ( along with other covariates ) and health outcomes in with spatial adjacency information .the goal of step two is to identify the disjoint and contiguous regions ( ) such that the eha of the spatial units are homogeneous within each but distinct between and ( ) .the contiguity requires that vertex in , there exists at least one vertex which is connected to .then the region detection becomes a graph segmentation problem based on the similarity matrix .we aim to estimate a set of binary segmentation binary parameters and that parcellate into .therefore , the segmentation model aims to estimate the latent binary parameters : with the contiguity constraint , where is the neighborhood of edge .one way to circumvent this is to identify the for those in ( which is analogous to cutting " edges in the graph ) with the optimization that : where is the number of disjoint sets and is the sum of a set of edge weights ( with and ) to isolate the subgraph from .however , even for a planary graph g and parcellation , the optimization is np complex .to solve the objective function in [ eq : cut ] , a two step relaxation procedure is often used .the first step is a continuous relaxation : where l is the normalized graph laplacian matrix with ( ) , and is a coordinate matrix ( all entries are continuous ) that places the close nodes ( based on the adjacency matrix ) near to each other in , and ( , ) is the vector ( ) . by rayleigh quotient , are the first eigen - vectors of spectral decomposition of ( with ascending order of eigen - values ) ( ) .in addition , the unnormalized graph laplacian matrix is also often used ( ) . 
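a minimal sketch of step one and of the first half of step two on placeholder data ; it assumes that statsmodels exposes dfbetas for glm fits , and the gaussian kernel is written in its usual form since the exact scaling of the similarity was not reproduced above :

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200                                        # placeholder number of counties
pm25 = rng.gamma(5.0, 2.0, n)                  # invented exposure values
conf = rng.normal(size=(n, 2))                 # invented confounders
expected = rng.uniform(5e3, 5e4, n)            # age-adjusted expected counts
W = (rng.random((n, n)) < 0.03).astype(float)  # stand-in for the county adjacency matrix
W = np.triu(W, 1)
W = W + W.T
y = rng.poisson(expected * 1e-3 * np.exp(0.02 * pm25 + conf @ np.array([0.1, -0.1])))

# step 1: fit the poisson regression and take the dfbetas of the pm2.5 coefficient
X = sm.add_constant(np.column_stack([pm25, conf]))
fit = sm.GLM(y, X, family=sm.families.Poisson(), offset=np.log(expected)).fit()
dfb = fit.get_influence().dfbetas[:, 1]        # column 1 corresponds to pm2.5

# step 2, first half: adjacency-masked gaussian similarity between spatial units
s = dfb.std()
S = np.exp(-(dfb[:, None] - dfb[None, :]) ** 2 / (2.0 * s ** 2)) * W
```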
from the spectral graph theory point of view, the bayesian car model updates the random effect ( ) posterior sampling by using the heuristic to increase the fusion of likelihood and the objective function in [ eq : relaxation1 ] which is equivalent to minimizing unnormalized graph laplacian matrix ( ) .therefore , the car prior can be considered as a penalty ( restriction ) term that aligns with the spectral clustering objective function .the second step of the automatic region detection procedure is discretization relaxation that produces which is a binary matrix with all entries either 0 or 1 and .then , objective function becomes which is equivalent to [ eq : cut ] .the second step optimization aims to calibrates the coordinate matrix with reference to . to implement this optimization step , and applyk - means clustering algorithms for the distretization relaxation .however , the k - means clustering algorithm results may vary due to different random initialization and yield unstable results . multiclass spectral clustering algorithm which is robust to random initialization and nearly global - optimal .the optimization yields results of the automatic region detection , and based on all spatial units are categorized to class with contiguity .in addition , can be obtained from the resulting , and under mild regularity condition the spectral clustering based region segmentation estimator is consistent that is ( , ) .we briefly summarize the algorithm in appendix and refer the readers to the original paper for the detailed optimization algorithm . as a comparison, the popular bayesian car model leverages the second line of formula [ eq : reg ] and imposes areal random intercept linked with the spatial adjacency matrix by letting . is a positive scale parameter , is the spatial adjacency matrix introduced in section 1 , ( ) , and is chosen to ensure non - singular .note that the distribution of is a proper car distribution when .when implementing the mcmc for a bayesian car model , the chain updating criteria incorporate both the likelihood part and the car prior .the parameter update rule in part favors smaller values , i.e. the spatially adjacent and have similar values .interestingly , the prior function is intrinsically linked with the objective function of spectral clustering algorithm which aims to minimize , where is unnormalized graph laplacian matrix when and is discretized ( n nodes and k classes ) matrix to represent the cluster membership that and ( ) . using our similarity matrix as , is smaller when the spatially adjacent units with similar dfbetas are classified into one spatial cluster . with similar objective functions , could be obtained by discretizing ( ) .hence , both the car and rar model incorporate spatial adjacency in a close format of the updating criteria and objective function . yet , as the random effects ( residuals or the random slopes ) in the car model are continuous , they could not provide region parcellation information to identify regions with distinct eha as the rar model does . provided with the automatic region parcellation results , we estimate the region / subgraph specific association between air - pollution exposure and health outcomes . 
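continuing the sketch , the continuous relaxation and a plain k - means discretization ( standing in for the multiclass discretization adopted by the rar model ) turn the similarity matrix into region labels ; contiguity is not explicitly enforced here beyond the adjacency masking of the similarity :

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def spectral_regions(S, k):
    """continuous relaxation of the graph cut followed by k-means discretization.
    S is the adjacency-masked similarity matrix built above."""
    d = S.sum(axis=1)
    d = np.where(d > 0, d, 1e-12)                    # guard against isolated units
    D_isqrt = np.diag(1.0 / np.sqrt(d))
    L_sym = np.eye(len(d)) - D_isqrt @ S @ D_isqrt   # normalized graph laplacian
    _, vecs = eigh(L_sym)                            # eigenvalues in ascending order
    U = vecs[:, :k]                                  # first k eigenvectors
    norms = np.linalg.norm(U, axis=1, keepdims=True)
    U = U / np.where(norms > 0, norms, 1.0)          # row-normalise before clustering
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(U)

labels = spectral_regions(S, k=3)                    # region label for each spatial unit
```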
the straightforward method is to conduct stratified analysis such that within each , the glmis estimated : within each , we further investigate the spatial dependence of the residuals by using semivariogram or moran s i statistic .if the residuals of the spatially close units are dependent with each other , then the spatial autoregressive models such as car and sar can be applied for regression analysis .alternatively , a glm could be applied to fit all spatial units by using the region indicator as a categorical covariate and adding the interaction terms of the categorical region indicators and the air pollution exposure . in light of the law of parsimony, we aim to maximize the likelihood with constraint of model complexity and apply the commonly used model selection criteria bic to determine the appropriate number of .particularly , the bic value is a function of number of regions ( ) , and lower bic implies appropriate number of regions .we conduct a set of simulations to demonstrate the performance of rar and compare it with conventional spatial statistical methods including sar and car models .we first generate a map of spatial units from three distinct regions by letting and with .we assume that there are three distinct associations for the three different regions .then , we simulate the covariates and residuals , for example , we let .we further apply gaussian kernel to smooth the residuals to reflect the spatial dependency .then , the dependent variable follows .the input data are observations of for and the region parcellation is unknown .we illustrate the data simulation and model fitting procedure in figure [ fig : rs ] .we simulate 100 data sets at three of the noise levels ( , , and ) .we apply the rar method to analyze the simulated data sets and compare it to the car model and sar model .we evaluate the performance of different methods by using the criteria of the bias and 95% ci coverage of across the 100 simulated data sets , and the results are summarized in table 1 .we note that though some smoothing effects are observed at the boundaries , the transitions are fairly well recaptured .we do not include the bayesian car spatially varying coefficient model ( e.g. 
) because it yields different regression coefficient for each location , and it is not available for the comparison of region level .the results show that without region parcellation to account for the spatially varying coefficients , the eha estimation could be vastly biased by using conventional spatial data analysis methods .in addition , rar model seems not to be affected by the noise levels .the rar method can effectively and reliably detect the spatially varying regions and yield robust and close estimates of true .lccccccc & & & + parameters & mean ( sd ) & ci coverage & mean ( sd ) & ci coverage & mean ( sd ) & ci coverage + & & & & & + ( 40 ) & 39.00(3.22 ) & 98% & 4.19 ( 0.35 ) & 4.1% & 3.35 ( 0.28 ) & 4.9% + ( -30 ) & -28.68 ( 1.93 ) & 99% & - & 6.3% & - & 5.6% + ( 10 ) & 9.78 ( 0.35 ) & 99% & - & 22.3% & - & 19.3% + & & & & & + ( 40 ) & 39.13(2.89 ) & 99% & 2.07 ( 1.12 ) & 1.5% & 2.23 ( 1.54 ) & 1.3% + ( -30 ) & -29.14 ( 1.02 ) & 99% & - & 4.3% & - & 2.7% + ( 10 ) & 8.88 ( 0.35 ) & 98% & - & 8.3% & - & 5.9% + & & & & & + ( 40 ) & 39.27(3.14 ) & 99% & 1.98 ( 0.87 ) & 0.9% & 0.8 ( 0.93 ) & 0.4% + ( -30 ) & -29.32 ( 1.87 ) & 99% & - & 0.2% & - & 0.3% + ( 10 ) & 8.77 ( 0.52 ) & 99% & - & 2.5% & - & 2.1% +the 2010 annual circulatory mortality with icd10 code i00-i99 and annual ambient fine particular matter ( pm2.5 ) for 3109 counties in continental u.s was downloaded from cdc wonder web portal .the annual mortality rate ( per 100,000 ) was age - adjusted and the reference population was 2000 u.s population .the annual pm2.5 measurement was the average of daily pm2.5 based on 10 km grid which were aggregated for each county .the measurement of pm2.5 in 10 km grid was from us environmental protection agency ( epa ) air quality system ( aqs ) pm2.5 in - situ data and national aeronautics and space administration ( nasa ) moderate resolution imaging spectroradiometer ( modis ) aerosol optical depth remotely sensed data . in this dataset, a threshold of 65 micrograms per cubic meter was set to ( left ) truncate the data to avoid invalid interpolation of grid pm2.5 .we use the annual data set for 2001 , because the population size and demographic information is more accurate by using 2000 u.s . census data . 
in figure [ x ], we illustrate the maps for annual age adjusted cardiovascular disease rate and air pollution ( pm 2.5 ) level .[ x ] we perform the rar analysis on this data set by following the three steps described in section 2 .we select the number of regions which minimizes bic .we then incorporate the identified region labels as categorical covariates as well as the interaction between region labels and pm2.5 exposure levels .we examine the effects of detected regions ( by introducing 15 dummy variables and 15 interaction variables ) by using likelihood ratio test , both main effects ( region ) and interaction terms are significant with .figure [ beta2001 ] demonstrates the automatically detected regions and spatially varying associations between pm 2.5 and mortality rate at different regions ( secondary parameters ) .the results reveal that the eha are not coherent across the counties in the nation and rar defines regions by breaking and rejoining the counties in different states based on similarity of eha .the rar defined map may also reveal potential confounders that affect health risk assessment .we note that the most significant positive eha resides in northwest region and florida : although the air pollution levels are not among the highest , the disease and exposure are highly positively associated .there are regions exhibiting negative eha , for instance , the southeast region ( part of ga , sc , and nc ) and we further verify the association on the exposure and health outcome in enlarged figure and interestingly by visual checking the exposure and health outcome are negatively associated .there could be potential co - founders such as medical facility accessibility and dietary behavior difference , etc .our results reveal that region - wise eha may vary across the nation ( affected by local factors ) , which in part addresses the ecological fallacy ( simpson s paradox ) .the rar method could be used as a tool to raise further research questions and to motivate new public health research investigation of the variation .[ beta2001 ] [ zoom ]the mapping of disease and further building environment - health / disease association have long been a key aspect of public health research .however there has been challenge for statistical data analysis to yield spatially varying eha at region level .most previous methods employ random effect model by letting each spatial unit have a random slope and borrow power from neighbors , which is advantageous with regard to improving model fitting and model variance explanation .but , it is also beneficial to provide a map of regions that is defined by eha similarity because it would allow us to directly draw statistical inferences at the region level as main effects may vary spatially(which is our major motivation to develop the rar method ) .rooted from image parcellation , the rar framework aims to parcellate a large map into several contiguous regions with a two - fold goal : i ) to define data - driven regions that reveals spatially varying eha at region level ; and ii ) to utilize the regions to fit a better regression model . 
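the model-fitting half of this second goal can be sketched as follows: for each candidate number of regions, re-run the parcellation, fit a poisson glm with region indicators and region-by-pm2.5 interaction terms (log expected count as offset), and keep the candidate with the smallest bic, as described earlier. the parcellation callback and variable names below are illustrative.

```python
import numpy as np
import statsmodels.api as sm

def fit_region_glm(deaths, expected, pm25, labels):
    regions = np.unique(labels)
    dummies = np.column_stack([(labels == r).astype(float) for r in regions])
    X = np.column_stack([dummies, dummies * pm25[:, None]])   # region dummies + region x pm2.5
    model = sm.GLM(deaths, X, family=sm.families.Poisson(), offset=np.log(expected))
    return model.fit()

def select_n_regions(deaths, expected, pm25, parcellate, candidates):
    """parcellate(k) must return one region label per county for a k-region parcellation."""
    best_k, best_bic = None, np.inf
    for k in candidates:
        res = fit_region_glm(deaths, expected, pm25, parcellate(k))
        bic = -2.0 * res.llf + len(res.params) * np.log(len(deaths))   # bic computed directly
        if bic < best_bic:
            best_k, best_bic = k, bic
    return best_k
```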
based on our simulation and example data analysis results, it seems that the performance of the rar method is satisfactory .a car or sar model could be applied on top of the rar method , but the rar could uniquely provide region - wise inferences .the region - wise eha inferences provide important guidance for public health decision making .for instance , although the exposure levels at california and ohio river valley are high , the eha of these two regions are not among the highest associations .thus , the rar model could provide the informative geographic parcellations and inferences that neither exposure or health status data alone can exhibit ( e.g figure 2 ) .clustering and cluster analysis have been widely applied in spatial data analysis ( ). however , the most methods are limited to provide disjoint and contiguous regions with distinct disease exposure associations .the regions with different associations may suggest some unidentified confounders and other public health or geographic / environmental factors affecting population health .although we demonstrate our model for cross sectional analysis , a longitudinal model could be extended straightforwardly because the dfbetas could be computed for gee or mixed effect model as well .the computational load of the rar model is negligible and all of our simulation and data example calculation time is within a minute by using a pc with i7 cpu and 24 g memory .chen s research is supported in part by umd tier1a seed grant .* algorithm * : given an adjacency matrix for each subject and the number of classes , 1 .compute the degree matrix , where .2 . find the eigen - solutions ] , where operation extracts the diagonal elements of matrix as a vector ; and creates a matrix with diagonal elements equal to and off - diagonal being zeros .4 . set the convergence criterion parameter , and initialize a matrix by the following steps : denote by the column of for .set ^{\mathrm t} ] where 5 .minimize the objective function : .where stands for frobenius norm ; and with the term is the centroid of which minimizes with respect to .+ then , where .conduct singular value decomposition on the matrix ,else , update .go to step 5 .banerjee , s. , gelfand , a. e. , finley , a. o. , sang , h. ( 2008 ) .gaussian predictive process models for large spatial data sets ._ journal of the royal statistical society : series b ( statistical methodology ) _ , 70(4 ) , 825 - 848 .bell , m. l. , ebisu , k. , leaderer , b. p. , gent , j. f. , lee , h. j. , koutrakis , p. , ... peng , r. d. ( 2014 ) .associations of pm2 .5 constituents and sources with hospital admissions : analysis of four counties in connecticut and massachusetts ( usa ) for persons 65 years of age ._ environmental health perspectives _ , 122(2 ) , 138 .best , n. , arnold , r.a ., thomas , a. , waller , l.a . , and conlon , e.m .( 1999 ) bayesian models for spatially correlated disease and exposure data . in _bayesian statistics _ 6 , j.m .bernardo , j.o .berger , a.p . , dawid , and a.f.m .smith ( eds . ) .oxford : oxford university press .chung , y. , dominici , f. , wang , y. , coull , b. a. , bell , m. l. ( 2015 ) .associations between long - term exposure to chemical constituents of fine particulate matter ( pm2 .5 ) and mortality in medicare enrollees in the eastern united states . _ environmental health perspectives _ , 123(5 ) , 467 .dominici , f. , daniels , m. , zeger , s. l. , samet , j. m. ( 2002 ) . 
air pollution and mortality : estimating regional and national dose - response relationships ._ journal of the american statistical association _ , 97(457 ) , 100 - 111 .garcia , c. a. , yap , p. s. , park , h. y. , weller , b. l. ( 2015 ) .association of long - term pm2 .5 exposure with mortality using different air pollution exposure models : impacts in rural and urban california ._ international journal of environmental health research _ , ( ahead - of - print ) , 1 - 13 .waller , l. a. , goodwin , b. j. , wilson , m. l. , ostfeld , r. s. , marshall , s. l. , hayes , e. b. ( 2007 ) .spatio - temporal patterns in county - level incidence and reporting of lyme disease in the northeastern united states , 19902000 . _ environmental and ecological statistics _ , 14(1 ) , 83 - 100 .wheeler , d. c. and waller , l. a. ( 2008 ) .mountains , valleys , and rivers : the transmission of raccoon rabies over a heterogeneous landscape .journal of agricultural , biological , and environmental statistics , 13(4 ) , 388 - 406 .
|
motivated by analyzing a national data base of annual air pollution and cardiovascular disease mortality rate for 3100 counties in the u.s . ( areal data ) , we develop a novel statistical framework to automatically detect spatially varying region - wise associations between air pollution exposures and health outcomes . the automatic region - wise spatially varying coefficient model consists three parts : we first compute the similarity matrix between the exposure - health outcome associations of all spatial units , then segment the whole map into a set of disjoint regions based on the adjacency matrix with constraints that all spatial units within a region are contiguous and have similar association , and lastly estimate the region specific associations between exposure and health outcome . we implement the framework by using regression and spectral graph techniques . we develop goodness of fit criteria for model assessment and model selection . the simulation study confirms the satisfactory performance of our model . we further employ our method to investigate the association between airborne particulate matter smaller than 2.5 microns ( pm 2.5 ) and cardiovascular disease mortality . the results successfully identify regions with distinct associations between the mortality rate and pm 2.5 that may provide insightful guidance for environmental health research . _ keywords _ : air pollution , areal data , environmental health , segmentation , spatial statistics , spatially varying associations .
|
wireless operators are in the process of augmenting the macrocell network with supplemental infrastructure such as microcells , distributed antennas and relays .an alternative with lower upfront costs is to improve indoor coverage and capacity using the concept of _ end - consumer _ installed femtocells or home base stations .a femtocell is a low power , short range ( meters ) wireless data access point ( ap ) that provides in - building coverage to home users and transports the user traffic over the internet - based ip backhaul such as cable modem or dsl . femtocell users experience superior indoor reception and can lower their transmit power .consequently , femtocells provide higher spatial reuse and cause less interference to other users . due to cross - tier interference in a two - tier network with shared spectrum , the target per - tier sinrs among macrocell and femtocellusers are coupled .the notion of a sinr target " models a certain application dependent minimum quality of service ( qos ) requirement per user .it is reasonable to expect that femtocell users and cellular users seek different sinrs ( data rates ) typically higher data rates using femtocells because home users deploy femtocells in their self interest , and because of the proximity to their bs .however , the qos improvement arising from femtocells should come at an expense of reduced cellular coverage .contemporary wireless systems employ power control to assist users experiencing poor channels and to limit interference caused to neighboring cells . in a two - tier network however , cross - tier interference may significantly hinder the performance of conventional power control schemes .for example , signal strength based power control ( channel inversion ) employed by cellular users results in unacceptable deterioration of femtocell sinrs .the reason is because a user on its cell - edge transmits with higher power to meet its receive power target , and causes excessive cross - tier interference at nearby femtocells .interference management in two - tier networks faces practical challenges from the lack of coordination between the macrocell base - station ( bs ) and femtocell aps due to reasons of scalability , security and limited availability of backhaul bandwidth . from an infrastructure or spectrum availability perspective, it may be easier to operate the macrocell and femtocells in a common spectrum ; at the same time , pragmatic solutions are necessary to reduce cross - tier interference . 
an open access ( oa ) scheme , which performs radio management by vertical handoffs forcing cellular users to communicate with nearby femtocells to load balance traffic in each tier is one such solution .a drawback of oa is the network overhead and the need for sufficient backhaul capacity to avoid starving the paying home user .additionally , oa potentially compromises security and qos for home users .this work assumes _ closed access _( ca ) , which means only licensed home users within radio range can communicate with their own femtocell .with ca , cross - tier interference from interior femtocells may significantly deteriorate the sinr at the macrocell bs .the motivation behind this paper is ensuring that the service ( data rates ) provided to cellular users remain unaffected by a femtocell underlay which operates in the same spectrum .three main reasons are the macrocell s primary role of an anytime anywhere infrastructure , especially for mobile and isolated " users without hotspot access , the greater number of users served by each macrocell bs , and the end user deployment of femtocells in their self - interest .the macrocell is consequently modeled as primary infrastructure , meaning that the operator s foremost obligation is to ensure that an outdoor cellular user achieves its minimum sinr target at its bs , despite cross - tier femtocell interference .indoor users act in their self interest to maximize their sinrs , but incur a sinr penalty because they cause cross - tier interference . considering a macrocell bs with cochannel femtocells and one transmitting user per slot per cell over the uplink ,the following questions are addressed in this paper : * given a set of feasible target sinrs inside femtocell hotspots , what is the largest cellular sinr target for which a non - negative power allocation exists for all users in the system ? * how does the cellular sinr depend on the locations of macrocell and femtocell users and cellular parameters such as the channel gains between cellular users and femtocells ? * given an utility - based femtocell sinr adaptation with a certain minimum qos requirement at each femtocell , what are the ensuing sinr equilibria and can they be achieved in a distributed fashion ? * when a cellular user can not satisfy its sinr target due to cross - tier interference , by how much should femtocells reduce their sinr target to ensure that the cellular user s sinr requirement is met ?although this work exclusively focuses on the uplink in a tiered cellular system , we would like to clarify that portions of our analysis ( section iii ) are also applicable in the downlink with potentially different conclusions .due to space limitations , the downlink extension is omitted for future work .prior research in cellular power control and rate assignments in tiered networks mainly considered an operator planned underlay of a macrocell with single / multiple microcells . in the context of this paper , a microcell has a much larger radio range ( 100 - 500 m ) than a femtocell , and generally implies centralized deployment , i.e. by the service - provider .a microcell underlay allows the operator to handoff and load balance users between each tier .for example , the operator can preferentially assign high data rate users to a microcell because of its inherently larger capacity .in contrast , femtocells are consumer installed and the traffic requirements at femtocells are user determined without any operator influence . 
consequently , distributed interference management strategies may be preferred. our work ties in with well known power control schemes in conventional cellular networks and prior work on utility optimization based on game theory .results in foschini __ et al.__ , zander , grandhi _ _ et al.__ and bambos _ _ et al.__ provide conditions for sinr feasibility and/or sir balancing in cellular systems .specifically , in a network with users with target sinrs , a feasible power allocation for all users exists iff the spectral radius of the normalized channel gain matrix is less than unity . associated results on centralized / distributed / constrained power control , link admission control and user - bs assignment are presented in and numerous other works .the utility - based non - cooperative femtocell sinr adaptation presented here is related to existing game theory literature on non - cooperative cellular power control ( see for a survey ) .the adaptation forces stronger femtocell interferers to obtain their sinr equilibria closer to their minimum sinr targets , while femtocells causing smaller cross - tier interference obtain higher sinr margins .this is similar to xiao and shroff s utility - based power control ( ubpc ) scheme , wherein users vary their target sirs based on the prevailing traffic conditions . unlike the sigmoidal utility in , our utility function has a more meaningful interpretation because it models the femtocell user s inclination to seek higher data - rates and the primary role of the macrocell while penalizing the femtocell user for causing cross - tier interference .our sinr equilibria is simple to characterize unlike the feasibility conditions presented in prior works e.g . to minimize cross - tier interference ,prior femtocell research has proposed open access , varying femtocell coverage area , hybrid frequency assignments , adjusting the maximum transmit power of femtocell users and adaptive access operation of femtocells .in contrast , this paper addresses sinr adaptation and ensuring acceptable cellular performance in closed access femtocells .related works in cognitive radio ( cr ) literature such as propose that secondary users limit their transmission powers for reducing interference to primary users ( pus ) . in , cr users regulate their transmit powers to limit pu interference , but their work does not address individual rate requirements at each cr . qian _ _ et al.__ propose a joint power and admission control scheme , but provide little insight on how a cr user s data - rate is influenced by a pu s rate .in contrast , our results are applicable in cr networks for determining the _ exact relationship _ between the feasible sinrs of primary and cr users ; further our sinr adaptation can enable cr users to vary their data - rates in a decentralized manner based on instantaneous interference at pu receivers .near - far effects in a cochannel two - tier network are captured through a theoretical analysis providing the highest cellular sinr target for which a non - negative power allocation exists between all transmit - receive pairs given any set of femtocell sinrs and vice versa . 
with a common sinr target at femtocells and neglecting interference between femtocells , the per - tier pareto sinr pairs have an intuitive interpretation : the sum of the decibel ( db ) cellular sinr and the db femtocell sinr equals a constant .design interpretations are provided for different path loss exponents , different numbers of femtocells and varying locations of the cellular user and hotspots .femtocells individually maximize an objective function consisting of a sinr dependent reward , and a penalty proportional to the interference at the macrocell .we obtain a _ channel - dependant sinr equilibrium _ at each femtocell .the equilibrium discourages strongly interfering femtocells to use large transmit powers .this sinr equilibrium is attained using distributed power updates . for femtocell userswhose objective is to simply equal their minimum sinr targets , our adaptation simplifies to the foschini - miljanic ( fm ) update .numerical results show that the utility adaptation provides up to higher femtocell sinrs relative to fm .to alleviate cross - tier interference when the cellular user does not achieve its sinr target , we propose a distributed algorithm to progressively reduce sinr targets of strongest femtocell interferers until the cellular sinr target is met . numerical simulations with femtocells / cell - site show acceptable cellular coverage with a worst - case femtocell sinr reduction of only ( with typical cellular parameters ) .the system consists of a single central macrocell serving a region , providing a cellular coverage radius .the macrocell is underlaid with cochannel femtocells aps .femtocell users are located on the circumference of a disc of radius centered at their femtocell ap .orthogonal uplink signaling is assumed in each slot ( scheduled active user per cell during each signaling slot ) , where a slot may refer to a time or frequency resource ( the ensuing analysis leading up to theorem [ th : paretosinrcontour ] apply equally well over the downlink ) .[ as : cci ] for analytical tractability , cochannel interference from neighboring cellular transmissions is ignored . during a given slot ,let denote the scheduled user connected to its bs .designate user s transmit power to be watts .let be the variance of additive white gaussian noise ( awgn ) at .the received sinr of user at is given as here represents the minimum target sinr for user at .the term denotes the channel gain between user and bs .note that can also account for post - processing sinr gains arising from , but not restricted to , diversity reception or interference suppression ( e.g. cdma ) . in matrix - vector notation , can be written as here while the vector denotes the transmission powers of individual users , and the normalized noise vector equals .the matrix is assumed to be irreducible meaning its directed graph is strongly connected ( * ? ? ?* page 362 ) with elements given as since is nonnegative , the spectral radius ( defined as the maximum modulus eigenvalue ) is an eigenvalue of ( * ? ? ?* theorem 8.3.1 ) . 
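numerically, the feasibility condition recalled in the related work above (and formalized next) is easy to check: build the normalized gain matrix from the channel gains and the targets, verify that its spectral radius is below one, and if so solve for the pareto-efficient power vector. the normalization used in this sketch is the standard one for this class of models; array names are illustrative.

```python
import numpy as np

def pareto_powers(G, gamma, noise):
    """G[i, j]: gain from user j to BS i; gamma[i]: target SINR at BS i; noise[i]: noise power."""
    n = len(gamma)
    F = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                F[i, j] = gamma[i] * G[i, j] / G[i, i]    # normalized cross-tier / cross-cell gains
    eta = gamma * noise / np.diag(G)                      # normalized noise vector
    rho = max(abs(np.linalg.eigvals(F)))                  # spectral radius of F
    if rho >= 1.0:
        return None, rho                                  # targets jointly infeasible
    p = np.linalg.solve(np.eye(n) - F, eta)               # componentwise-minimal feasible powers
    return p, rho
```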
applying perron - frobenius theory to , has a nonnegative solution ( or constitutes a _ feasible _ set of target sinr assignments ) _ iff _ the spectral radius is less than unity .consequently , the solution guarantees that the target sinr requirements are satisfied at all bss .further , is pareto efficient in the sense that any other solution satisfying needs at least as much power componentwise .when , then the max - min sir solution to is given as in an interference - limited system ( neglecting ) , the optimizing vector equals the perron - frobenius eigenvector of .in a two - tier network , let and denote the per - tier sinr targets at the macrocell and femtocell bss respectively . define and .any feasible sinr tuple ensures that the spectral radius with a feasible power assignment given by . this section derives the relationship between and as a function of and entries of the matrix .using the above notation , simplifies as here the principal submatrix consists of the normalized channel gains between each femtocell and its surrounding cochannel femtocells .the vector consists of the normalized cross - tier channel gains between the transmitting femtocell users to the macrocell bs .similarly , consists of the normalized cross - tier channel gains between the cellular user to surrounding femtocell bss .below , we list two simple but useful properties of : [ prop : p1 ] _ is a non - decreasing function of .that is , . [ prop : p2 ] _. property [ prop : p1 ] is a consequence of ( * ? ? ?* corollary 8.1.19 ) and implies that increasing the per - tier sinrs in drives closer to unity .this decreases the margin for existence of a nonnegative inverse of in . therefore , assuming a fixed set of femtocell sinrs given by , the maximum cellular sinr target monotonically increases with .property [ prop : p2 ] arises as a consequence of being a principal submatrix of , and applying ( * ? ? ?* corollary 8.1.20 ) .intuitively , any feasible femtocell sinr in a tiered network is also feasible when the network comprises only femtocells since . from, the condition is nonnegative with expansion given as .we restate a useful lemma by meyer for obtaining in terms of and . [le : meyer ] ( * ? ? ? * meyer ) _ let be a nonnegative irreducible matrix with spectral radius and let have a k - level partition in which all diagonal blocks are square . for a given index , let represent the principal block submatrix of by deleting the row and column of blocks from .let designate the row of blocks with removed .similarly , let designate the column of blocks with removed .then each perron complement is also a nonnegative matrix whose spectral radius is again given by . _ using lemma [ le : meyer ] , we state the first result in this paper . [th : paretosinrcontour ] _ assume a set of feasible femtocell sinrs targets such that , and a target spectral radius . the highest cellular sinr target maintaining a spectral radius of then given as ^{-1}\mathbf{\gamma}_f \mathbf{q}_f}.\end{aligned}\ ] ] _ from lemma [ le : meyer ] , the perron complement of the entry `` '' of in is a nonnegative scalar equaling .this implies , ^{-1}\mathbf{\gamma}_f \mathbf{q}_f.\end{aligned}\ ] ] rearranging terms , we obtain .note that since , the inverse ^{-1}=\sum_{k=0}^{\infty}(\gamma_f/\kappa)^{k}\mathbf{f}^k ] by the lower bound .intuitively , restates that is an upper bound on the product of the per - tier sinrs , achieved when in , i.e. 
the interference between neighboring femtocells is vanishingly small .ignoring is justifiable because the propagation between femtocells suffers at least a double wall partition losses ( from inside a femtocell to outdoor and from outdoor onto the neighboring femtocell ) , and there is only one partition loss term while considering the propagation loss between a cellular user to femtocells .thus , a simple relationship between the highest per - tier sinrs is expressed as : _ for small , the sum of the per - tier decibel sinrs equals a channel dependant constant ._ we denote this constant as the _ link budget_. choosing a cellular sinr target of db necessitates any feasible femtocell sinr target to be no more than db . to keep large ,it is desirable that the normalized interference powers are decorrelated ( or and do not peak simultaneously ) . in a certain sense, the link budget provides an efficiency index " of closed access femtocell operation , since open ( or public ) femtocell access potentially allows users to minimize their interference by handoffs .assume a path loss based model wherein the channel gains ( represents the distance between user to bs .the term is the path loss exponent ( assumed equal indoors and outdoors for convenience ) .femtocell user is located at distances from its ap and from .the cellular user is located at distances from its macrocell bs and from each femtocell ap ( see fig . [fig : example_linkbudget ] for femtocells ) . in this setup , .the vector .the decibel link budget varies with as a straight line and given as define as the _ interference distance product normalized by the signaling distance product_. then , monotonically increases with whenever the slope and decreases otherwise .consequently , the condition determines the sensitivity of link budgets to the path - loss exponent .this subsection studies how the per - tier sinrs and link budgets vary with user and femtocell locations in practical path loss scenarios .assume that the cellular user is located at a distance from the macrocell . at a distance from ( see fig .[ fig : twotier_femtocellnetwork ] ) , surrounding cochannel femtocells are arranged in a square grid e.g. residential neighborhood of area with femtocells per dimension .each femtocell has a radio range equaling meters .let denote the distance between transmitting mobile and bs . for simplicity ,neither rayleigh fading nor lognormal shadowing are modeled . assuming a reference distance for all users , the channel gains are represented using the simplified path loss model in the imt-2000 specification , given as in , respectively denote the cellular , indoor and indoor to outdoor femtocell path loss exponents . 
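before the remaining terms of the propagation model are specified below, a toy numerical illustration of this per-tier trade-off: fix a common femtocell target and bisect on the cellular target until the spectral radius of the normalized gain matrix reaches a margin just below one. the bare d^(-alpha) gains and the geometry are made up for illustration and omit the fixed and partition losses of the full model.

```python
import numpy as np

def spectral_radius(gamma_c, gamma_f, G):
    gamma = np.concatenate(([gamma_c], gamma_f))
    n = len(gamma)
    F = np.array([[gamma[i] * G[i, j] / G[i, i] if i != j else 0.0 for j in range(n)]
                  for i in range(n)])
    return max(abs(np.linalg.eigvals(F)))

def max_cellular_sinr(gamma_f, G, kappa=0.9999, hi=1e6, iters=60):
    lo = 0.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if spectral_radius(mid, gamma_f, G) < kappa:
            lo = mid                      # still feasible: push the cellular target up
        else:
            hi = mid
    return lo

# toy geometry (metres): macrocell BS at the origin, two femtocell APs,
# each femtocell user 5 m from its AP, cellular user 300 m from the macrocell BS
bs = np.array([[0.0, 0.0], [150.0, 100.0], [250.0, -80.0]])
users = np.array([[300.0, 0.0], [150.0, 105.0], [250.0, -75.0]])
dist = np.linalg.norm(bs[:, None, :] - users[None, :, :], axis=2)   # dist[i, j]: BS i to user j
G = dist ** (-3.8)                                                   # bare path-loss gains
gamma_f = np.full(2, 10 ** (15.0 / 10.0))                            # 15 dB femtocell targets
print(10 * np.log10(max_cellular_sinr(gamma_f, G)))                  # largest cellular target (dB)
```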
defining as the carrier frequency in mhz , db equals the fixed decibel propagation loss during cellular transmissions to .the term is the fixed loss between femtocell user to their bs .finally , denotes the fixed loss between femtocell user to a different bs , and assumed equal to .the term explicitly models partition loss during indoor - to - outdoor propagation ( see numerical values for all system parameters in table [ tbl : sysprms ] ) .[ as : asplexponent ] assume equal outdoor path loss exponents from a cellular user and a femtocell user to the macrocell .that is , .following as[as : asplexponent ] , substituting in and assuming that users are at least meter away from bss ( or ) , the link budget is given as fig .[ fig : sinrcontourssquaregrid ] shows the sinr contours using , considering a common femtocell sinr target and different normalized and values .the target spectral radius was chosen equal to ( ensuring that ) . for comparison ,the upper bound in was also plotted .three different positions normalized w.r.t of the cellular user and the femtocell grid are considered namely [ a ) ] , and and . in case ( a ) ,note that the macrocell bs is located in the _ interior _ of the femtocell grid .we observe that employing is a good approximation for the exact result given in .the highest per - tier sinrs occurs in configuration ( b ) suggesting a low level of normalized interference ( and ) .interestingly , when both users and hotspots are close to the macrocell bs [ configuration ( a ) ] , the per - tier sinrs are _ worse _ compared to the cell - edge configuration ( c ) .this counterintuitive result suggests that unlike a conventional cellular system where the regular placement of bss causes the worst - case sinrs typically at cell - edge , the _ asymmetric locations of interfering transmissions in a two - tier network potentially diminishes link budgets in the cell - interior as well . _the reason is because power control warfare " due to cross - tier interference from femtocells near the macrocell bs necessitates both tiers to lower their sinr targets .assuming in fig . [fig : twotier_femtocellnetwork ] , the following lemma provides a necessary condition under which the link budget in increases with ._ under assumption [ as : asplexponent ] and assuming fixed locations of all users w.r.t their bss , the link budget monotonically increases with whenever _ taking the first derivative of the link budget in with respect to yields .[ fig : lbsquaregrid ] plots the link budget in for and femtocells with the cellular user colocated at the grid center ( ) .the link budgets with are higher relative to those obtained when indicating link budgets tend to increase with higher path loss exponents in practical scenarios .[ fig : lbcdfrandom ] plots the cumulative distribution function ( cdf ) of considering randomly distributed femtocells inside a circular region of radius centered at distance from . with femtocells , both the regular and random configurations in figs .[ fig : lbsquaregrid]-[fig : lbcdfrandom ] show diminishing in the cell - interior suggesting significant levels of cross - tier interference .the above results motivate adapting femtocell sinrs with the following objectives namely to maximize their own sinrs , and limit their cross - tier interference .due to the absence of coordination between tiers , implementing centralized power control will likely be prohibitively difficult . in this section , we present a utility - based sinr adaptation scheme . 
using microeconomic concepts, we shall assume that cellular and femtocell users participate in a player non - cooperative power control game ] . given user ,designate as the vector of transmit powers of all users other than and define as the interference power experienced at .formally , for all users , this power control game is expressed as we are interested in computing the equilibrium point ( a vector of transmit powers ) wherein each user in individually maximizes its utility in , _ given _ the transmit powers of other users . such an equilibrium operating point(s ) in optimization problemis denoted as the _ nash equilibrium _ .denote as the transmission powers of all users under the nash equilibrium . at the nash equilibrium, no user can unilaterally improve its individual utility .mathematically , we shall make the following assumptions for the rest of the work .[ as : as4 ] all mobiles have a maximum transmission power constraint , consequently the strategy set for user is given as ] .we now employ the following theorem from glicksberg , rosen and debreu : [ th : existencene ] following theorem [ th : existencene ] , the optimization problems in and have a nash equilibrium .the following theorem derives the sinr equilibria at each femtocell .[ th : femtonasheqb ] _ a sinr nash equilibrium at femtocell bs satisfies , where is given as _ since femtocell user individually optimizes its utility as a best response to other users , we first fix interfering powers .because is a strictly concave function of , its partial derivative assuming differentiability monotonically decreases with increasing .a necessary condition for the existence of local optima is that the derivative of in the interval ] , the user chooses its equilibrium transmit power depending on the sign of the derivative transmit at full power ( if in ] , since , one may cancel on both sides of .the conditions - ensure that [ resp . are monotone decreasing [ resp. monotone non - decreasing ] in .the solution to corresponds to the intersection of a monotone decreasing function and a monotone increasing function w.r.t the transmitter power . given , this intersection is unique ( * ? ? ? * section 3 ) and corresponds to the nash equilibrium at . using the notation evaluated at yields .this completes the proof .assume the and in as shown below . the exponential reward intuitively models femtocell users desire for higher sinrs relative to their minimum sinr target .the linear cost discourages femtocell user from decreasing the cellular sinr by transmitting at high power . assuming , it can be verified that the above choice of and satisfies the conditions outlined in and . _ with the utility - based cellular sinr adaptation [ resp . femtocell sinr adaptation ] in [ resp . with reward - cost functions in ] , the unique sinr equilibria at bs are given as where is given as _ the cellular user s utility function is strictly concave w.r.t given . consequently , the argument maximizer in occurs either in the interior at or at the boundary point if in ] db . in any given trial , if the generated set of minimum sinr targets resulted in in , then our experiments scaled by a factor for ensuring feasible femtocell sinr targets .the first experiment obtains the improvements in femtocell sinrs relative to their minimum sinr targets with our proposed sinr adaptation .a cell - edge location of the cellular user ( ) and the femtocell grid ( ) is considered . 
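the comparisons reported below use the foschini-miljanic (fm) iteration as the baseline; as noted above, the utility adaptation reduces to it when a femtocell simply tracks its minimum sinr target. a minimal sketch of that baseline, with illustrative inputs:

```python
import numpy as np

def sinr(p, G, noise):
    recv = G * p[None, :]                        # recv[i, j]: power of user j received at BS i
    signal = np.diag(recv)
    interference = recv.sum(axis=1) - signal
    return signal / (interference + noise)

def fm_iteration(G, gamma, noise, p_max, n_iter=200):
    """each user scales its power by (target / measured SINR), truncated at its maximum power."""
    p = np.full(len(gamma), 1e-3)
    for _ in range(n_iter):
        p = np.minimum(p_max, gamma / sinr(p, G, noise) * p)   # fully distributed update
    return p, sinr(p, G, noise)
```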
to maximize the chance of obtaining a feasible set of sinrs , the cellular sinr target is equal to either its minimum target db , or scaling its highest obtainable target in by db ( which ever is larger ) and given as ^{-1}\mathbf{\gamma}_f \mathbf{q}_f } \right \rbrace.\end{aligned}\ ] ] assuming and in , fig . [fig : meanfemtosinr ] plots the mean decibel femtocell sinrs ( ) in for different and values . selecting models femtocell users seeking a greater sinr reward relative to their minimum sinr target . with and femtocells, there is a nearly % improvement in mean femtocell sinrs relative to their average minimum sinr target . with a higher interference penalty at femtocells ( ) ,our utility adaptation yields a nearly db improvement in mean femtocell sinrs above their mean sinr target .when , femtocell users have little inclination to exceed their minimum sinr targets .in fact , with femtocells , the mean equilibrium femtocell sinrs are _ below the mean sinr target _ because femtocell users turn down their transmit powers to improve the cellular link quality .the second experiment considers randomly selected decibel cellular sinr targets chosen uniformly in the interval ] is a pre - specified tolerance .form status indicator at femtocell : , where sinr target update : , where & * db target * & ( db ) & ( db ) & ( db ) & ( db ) & ( dbm ) + & & & & & & & + & & & & & & & + & & & & & & & + & & & & & & & + & & & & & & & + & & & & & & & + & & & & & & & + & & & & & & & + & & & & & & & + & & & & & & & + & & & & & & & + & & & & & & & + & & & & & & & + & & & & & & & + & & & & & & & + & & & & & & & + & & & & & & & + + + + +
|
in a two tier cellular network comprised of a central macrocell underlaid with shorter range femtocell hotspots cross - tier interference limits overall capacity with universal frequency reuse . to quantify near - far effects with universal frequency reuse , this paper derives a fundamental relation providing the largest feasible cellular signal - to - interference - plus - noise ratio ( sinr ) , given any set of feasible femtocell sinrs . we provide a link budget analysis which enables simple and accurate performance insights in a two - tier network . a distributed utility - based sinr adaptation at femtocells is proposed in order to alleviate cross - tier interference at the macrocell from cochannel femtocells . the foschini - miljanic ( fm ) algorithm is a special case of the adaptation . each femtocell maximizes their individual utility consisting of a sinr based reward less an incurred cost ( interference to the macrocell ) . numerical results show greater than improvement in mean femtocell sinrs relative to fm . in the event that cross - tier interference prevents a cellular user from obtaining its sinr target , an algorithm is proposed that reduces transmission powers of the strongest femtocell interferers . the algorithm ensures that a cellular user achieves its sinr target even with femtocells / cell - site , and requires a worst case sinr reduction of only at femtocells . these results motivate design of power control schemes requiring minimal network overhead in two - tier networks with shared spectrum .
|
latent variable models are a highly versatile tool for inferring the structures and hierarchies from unstructured data . typical use cases of latent variable models include learning topics from documents , building user profiles , predicting user behavior , and generating hierarchies of class labels .although latent variable models are arguably some of the most powerful tools in machine learning , they are often overlooked and underutilized in practice .using latent variable models has many challenges in real world settings : models are slow to compute , hardly scalable , and results are noisy and hard to visualize . while prior work in has addressed challenges in computational speed and scalability and systems such as have been proposed to visualize topic modeling results , there is yet a system that combines these components and presents the results to end users in an intuitive , effective , and meaningful way . in this paper , we present the architecture of a modularized distributed system built on top of latent variable models that is able to process , analyze , and intuitively visualize large volumes of amazon product reviews .we show a fully - featured demo using a completed prototype , and present the results with two case studies .the current amazon system for presenting reviews to a potential customer consists of three distinct views : customer quotes , most helpful reviews , and the full review list .below , we describe each of these in detail and outline the deficiencies that our system intends to resolve . the quotes view provides a quick overview of a product and consists of three single - sentence quotations extracted from user reviews that are intended to capture sentiment about different aspects of a product in a concise and user - friendly way . as an example , the macally power adapter quotes in figure [ figure_quotes ] display customer sentiment about the product s cord length and durability , a possibly annoying indicator light , and low cost relative to the brand - name alternative . while these quotes do provide insight into a product ,a severe limitation is that they only represent the reviews of a few representatives .regardless of how these representatives are chosen , their usefulness will always be limited by their inability to convey aggregate information on user sentiment towards different features of a product .we do , however , note the conciseness and simplicity of this representation . in designing our alternative systemwe attempt to retain as much of this simplicity as possible while improving information content .the most helpful reviews section , displayed on the amazon website above the full review list , provides the `` most helpful '' high and low rating reviews in a comparison - oriented view .an example of this is provided in figure [ figure_mosthelpful ] , again using the macally power adapter . with this view, users can see two contrasting product experiences that have been deemed legitimate by other customers .one benefit of this is noise - reduction via user helpfulness feedback .however , these `` most helpful reviews '' can often be quite long and may only capture a few facets of the product . on top of this ,helpfulness votes are strongly biased by a review s first vote a review that quickly gets a helpful vote is much more likely to get additional positive votes , and vice versa . 
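the concurrent topic modeling engines in this architecture run samplers of the kind described above; a minimal dense collapsed gibbs sweep over the usual count arrays is sketched below, with the sparsity-aware samplers cited earlier serving as drop-in replacements that reduce the per-token cost. array and argument names are illustrative.

```python
import numpy as np

def gibbs_sweep(docs, z, n_dk, n_kw, n_k, alpha, beta, rng):
    """docs: list of word-id lists; z: current topic assignment for every token (same shape)."""
    K, V = n_kw.shape
    for d, words in enumerate(docs):
        for i, w in enumerate(words):
            k = z[d][i]
            n_dk[d, k] -= 1; n_kw[k, w] -= 1; n_k[k] -= 1        # remove current assignment
            probs = (n_dk[d] + alpha) * (n_kw[:, w] + beta) / (n_k + V * beta)
            k = rng.choice(K, p=probs / probs.sum())             # draw a new topic
            z[d][i] = k
            n_dk[d, k] += 1; n_kw[k, w] += 1; n_k[k] += 1
    return z

# usage: rng = np.random.default_rng(0); call gibbs_sweep(...) repeatedly until convergence
```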
in our system, we hope to address these problems by placing substantially less weight on helpfulness ratings when presenting information while simultaneously capturing a larger set of product facets and reducing the amount of textual content that needs to be read . the full review list in helpfulness - sorted order is provided as a final source of reference .as there are hundreds of reviews for even moderately popular products , it is generally infeasible for a potential customer to examine this entire list .however , this view does provide the informational completeness that the previous two views lack .our proposed system is designed to retain this completeness while simultaneously improving individual review search and allowing for multiple sorted review lists based on different aspects of a product that the user may be interested in .many potentially suitable latent variable models exist for the task of analyzing amazon reviews , such as unsupervised models described in , supervised models in , and hierarchical models in .however , few of them have been designed with efficient inference and sampling procedures , such as . for practical reasons ,we only illustrate how our system works with latent variable models accompanied with efficient samplers , such as those introduced in .our prototype makes use of latent dirichlet allocation ( lda ) , a widely used topic model in which one assumes that documents are generated from mixture distributions of language models associated with individual topics .that is , the documents are generated by the latent variable model below : + for each document draw a topic distribution from a dirichlet distribution with parameter for each topic draw a word distribution from a dirichlet distribution with parameter for each word in document draw a topic from the multinomial via draw a word from the multinomial via the dirichlet - multinomial design in this model makes it simple to do inference due to distribution conjugacy we can integrate out the multinomial parameters and , thus allowing one to express in a closed - form .this yields a gibbs sampler for drawing efficiently .the conditional probability is given by here the count variables and denote the number of occurrences of a particular ( topic , document ) and ( topic , word ) pair , or of a particular topic , respectively .moreover , the superscript denotes count when ignoring the pair .for instance , is obtained when ignoring the ( topic , word ) combination at position .finally , denotes the joint normalization .sampling from ( 5 ) requires time since we have nonzero terms in a sum that need to be normalized . in large datasets where the number of topics may be large , this is computationally costlyhowever , there are many approaches for substantially accelerating sampling speed by exploiting the topic sparsity to reduce time complexity to and further to , where denotes the number of topics instantiated in a document and denotes the number of topics instantiated for a word across all documents .we propose a system that is capable of handling concurrent requests of product names from many users , analyzing relevant reviews by efficiently sampling from latent variable models , and displaying the streamlined updates from the models in real - time .figure [ fig : arch ] shows the architectural design of our proposed system . 
in this design , almost every component is asynchronous and stateless .furthermore , many tasks can be performed in a distributed fashion in order to balance computational costs .the major components of our system are the distributed web servers , asynchronous task dispatcher , data warehouse , search engine , text pre - processing engine , modularized concurrent topic modeling engine , update pool , and quality evaluator .the workflow of the system is as follows : after a query is accepted by one of the web servers , a request is dispatched to the review search engine .the search engine determines if pre - processed results already exist in the data warehouse , and if not , how much further pre - processing is required .requests of pre - processing tasks are created and dispatched to the text processing engine , where background workers are constantly pre - processing fresh reviews while assigning highest priority to new requests from the search engine . when pre - processed data is ready , the results are dispatched to the concurrent topic modeling engine .multiple topic modeling instances are created at the same time , each emitting updates every few gibbs sampling iterations to a pool where the results are further evaluated for quality . after evaluation ,the best result is selected and sent back to the asynchronous task dispatcher , where it is later routed back to the initial web server .the web server then packages the result and returns it for presentation to the end user .our use of multiple samplers begs the following question : what is the best way to efficiently evaluate the quality of model results ?the most popular two evaluation metrics , log likelihood and perplexity , both have their own merits .compared to log likelihood , perplexity is typically regarded as a more accurate metric when the amount of test data is large , but is much slower to evaluate . in our system , we chose to use log - likelihood as the primary measure of quality for the following reasons : * the number of reviews can be very limited for some products. computing perplexity on insufficient test data may yield inaccurate evaluation of quality .* there is almost zero overhead for computing log - likelihood .the modularized and stateless design allows for easier scalability towards large scale processing .for example , cassandra for data storage , play ! for distributed web servers , parameter server for topic modeling engines , and uima for pre - processingcan all be swapped into our system in order to scale up with minimal modifications to other modules . 
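as one concrete reading of the quality evaluator, each concurrent sampler's intermediate state can be scored by the corpus log-likelihood under its smoothed document-topic and topic-word estimates, and the best-scoring state forwarded back to the dispatcher. the exact criterion used in the engine may differ; names are illustrative.

```python
import numpy as np

def corpus_log_likelihood(docs, n_dk, n_kw, n_k, alpha, beta):
    V = n_kw.shape[1]
    theta = (n_dk + alpha) / (n_dk + alpha).sum(axis=1, keepdims=True)   # document-topic
    phi = (n_kw + beta) / (n_k + V * beta)[:, None]                      # topic-word
    ll = 0.0
    for d, words in enumerate(docs):
        ll += np.log(theta[d] @ phi[:, words]).sum()                     # per-token likelihood
    return ll

def pick_best(states, docs, alpha, beta):
    """states: list of (n_dk, n_kw, n_k) count tuples emitted by the concurrent samplers."""
    scores = [corpus_log_likelihood(docs, *s, alpha, beta) for s in states]
    return int(np.argmax(scores))
```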
to visualize the thousands to millions of parameters estimated by latent variable models, we design an intuitive visualization framework that is most suitable for topic models .we analyze the meaning and structure of the statistical models , their sparsity properties and interconnections , and how they can be mapped onto a two dimensional euclidean space .we additionally divide presentable information into a hierarchy in which a subset of information is available to the user at each level .our prototype system uses the electronics category from the snap amazon reviews dataset , a subset that consists of approximately 1 million reviews and 76,000 products .the primary components of our system pre - processing , database , modeling , and visualization are described below .our pre - processing pipeline is built primarily on stanford corenlp , a robust and highly modular package that has been shown to provide state - of - the - art performance on a variety of tasks . using corenlp, we transform raw review text into a lemmatized token form augmented with part - of - speech .ratings are used in place of sentiment to allow for a more intuitive per - topic rating and to reduce computation time . on our machine, we found that pre - processing time increased from 4.1ms to 706ms per review when sentiment analysis was added , making real - time sentiment computation with corenlp infeasible .similarly , our original intension was to include named - entity recognition in our pre - processing pipeline .our tests found that stanford s implementation of this is far too slow on our machine , adding named - entity recognition resulted in an increase in average review pre - processing time from 4.1ms to 93.6ms .while corenlp is used for base pre - processing , we defer stop word removal and instead filter out these words in real - time .this provides greater system flexibility and allows us to adjust stop word lists within seconds . from our tests, we found that filtering out tokens corresponding to words in a product s name resulted in substantially better and more visualizable topics .we also invested in augmenting mallet s standard stop word list with many amazon - specific stop words , such as `` review '' and `` star '' .we use high memory instances on google cloud engine ( gce ) for our review data warehouse .nosql databases such as cassandra are ideal candidates for our setting as we require a high degree of concurrency and high availability , but relatively low demand for consistency . for the prototype system ,a relational database is chosen to accomplish this task due to its simplicity and better performance on a relatively low volume of data . 
to reduce overhead ,the same database is used as a cache for pre - processed data .a background task runs continuously to pre - process fresh reviews , giving priority to reviews of products being currently queried .there are a few challenges in using a database for such a system .for instance , when a query is accepted , the system has to quickly determine if pre - processed reviews are available in the cache .if only part of the relevant reviews are cached , the fastest strategy to produce a reasonable response is to return the processed reviews and have the requester process the reviews not yet in the cache , then insert the processed results asynchronously as soon as they are available .when multiple queries are being processed concurrently , the chance that multiple on - demand pre - processing requests point to the same unprocessed review can not be neglected . in this case, the same unprocessed review may be processed multiple times by different threads , incurring both computational overhead and potential risks of cache deadlock and duplication .one solution to this problem is to employ a scheduler to manage processing and updating , avoiding duplicate processing through the use of efficient hashing .we use a customized version of the open - source machine learning language toolkit ( mallet ) as our modeling base . by generating a topic model from each product in a comparison in parallel ,we achieve a significant speedup compared to mallet s single threaded mode and substantially less overhead compared to mallet s multithreaded mode .our system is flexible with respect to the number of topics in that any number of topics can always be reduced to a core set for presentation to the user . since mallet s implementation of lda uses gibbs sampling, we continue sampling until relative convergence .we empirically determine the number of iterations for producing good results .in addition , we perform alpha / beta optimization every 100 iterations . we estimate review - topic and word - topic distributions , which we then feed into our model summarization engine .our modified version of mallet allows us to compute and emit intermediate summaries after the 10th iteration of sampling and again every 2 seconds until sampling is complete .this gives the user immediate feedback and allows us to provide improved results as they become available . from our topic model and reviews, we summarize each product with its topics and review summaries .each topic structure contains a probability , list of augmented lemmas , rating , nearest topic and distance to that topic for the comparison product , and a representative review .probabilities represent the overall topic distribution for the model generated from a product s reviews , which we use as our metric for topic relevance when presenting results to the end user .each augmented lemma consists of the lemma text and both normalized and unnormalized word - topic weights . 
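the nearest-topic and distance fields of this structure come from matching topics across the two products being compared; as described next, the distance is the hellinger distance between normalized word-topic weights. a sketch of that matching, assuming the two models share a vocabulary:

```python
import numpy as np

def hellinger(p, q):
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def nearest_topics(phi_a, phi_b):
    """phi_a, phi_b: topic-by-vocabulary matrices of normalized word-topic weights."""
    pairs = []
    for i, p in enumerate(phi_a):
        dists = [hellinger(p, q) for q in phi_b]
        j = int(np.argmin(dists))
        pairs.append((i, j, dists[j]))    # (topic in product A, nearest topic in B, distance)
    return pairs
```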
to compute nearest topics , we chose to use the hellinger distance metric between normalized word - topic weights .per - topic ratings are estimated as $ ] .+ the top review for each topic is found by maximizing the following metric over : here , , refer to the helpfulness and unhelpfulness votes for review , respectively .this metric was designed to address several problems that arise when using alone .namely , we add a slight preference towards reviews that were voted to be helpful in order to avoid poor - quality representatives .we also bias our metric towards reviews in which the user rated the product similarly to the topic s aggregate rating , reducing user confusion and the frequency of unexpected results ; a user browsing a 1-star topic does not expect to see a 5-star representative review praising the product . from our experiments, we found this metric to produce good results and have subjectively found it to offer more informative and intuitive representatives compared to the pure probability - based baseline .we also experimented with weighted n - gram topic visualization as an alternative to singular lemmas , however we found that single words resulted in simpler and more easily interpretable word clouds than n - grams .as previously mentioned , our backend system also computes review summaries .the fields of this structure user i d , profile name , helpful votes , unhelpful votes , rating , time , summary , and probabilities are self - explanatory .a key thing to note is our exclusion of the full review text . by doing this , we substantially reduce bandwidth costs and response time while allowing for later querying of individual reviews when requested . to achieve satisfactory performance , we made use of multiple rendering buffers and an advanced object management framework .the geometry of each data point is computed in real - time .review contents are pulled from servers on - demand to minimize traffic .we additionally made use of many invisible techniques to minimize traffic , computation , and rendering efforts in order to maintain a smooth user experience .our final prototype allows a user to query product reviews using an intuitive web interface and visualize initial results in under 3 seconds in the majority of instances .the interface allows for easy topic rating visualization with the ability to quickly select individual reviews .selecting a topic brings up a side panel containing a topic - specific product comparison along with topic - sorted review summaries .further details are available in the case studies below .below , we provide two product comparison case studies to demonstrate the utility and simplicity of our system . in the first , we compare the macally ps - ac4 ac power adapter for apple g4 to the apple usb power adapter for ipod ( white ) .we then repeat this analysis with a separate case consisting of two cameras , the canon digital rebel xt 8mp digital slr camera with ef - s 18 - 55 mm f3.5 - 5.6 lens ( black ) and the sony cybershot dsc - t1 5mp digital camera with 3x optical zoom .figure [ figure_powercase ] shows the interface a user of our system would see when comparing the two power adapters , using the macally adapter as the reference product . on our machine ,an initial view was available in approximately 2 seconds with updates continuing for a few seconds afterwards . 
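Stepping back to the model-summarization details above, the nearest-topic computation reduces to a Hellinger distance between normalized word-topic weight vectors. A minimal sketch, assuming the two products' topic-term matrices have already been aligned to a shared vocabulary (array names are illustrative):

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete distributions on the same vocabulary."""
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def nearest_topics(word_topics_a, word_topics_b):
    """For each topic (row) of product A, return (index, distance) of the
    closest topic of product B. Rows are normalized to sum to one first."""
    a = word_topics_a / word_topics_a.sum(axis=1, keepdims=True)
    b = word_topics_b / word_topics_b.sum(axis=1, keepdims=True)
    pairs = []
    for row in a:
        dists = [hellinger(row, other) for other in b]
        j = int(np.argmin(dists))
        pairs.append((j, dists[j]))
    return pairs
```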
from the word clouds , we immediately see that customers tend to think highly of the macally adapter's power cord , giving the topic high weight and an associated rating of 4.1 . in contrast , we see 2.5 stars attributed to a topic dominated by `` work '' , `` computer '' , and `` battery '' . clicking on this topic circle brings up the panel on the right , where we quickly learn that this product has a tendency to die after a few months and to work only intermittently , failing to charge the customer's computer . figure [ figure_cameracase ] shows the side panel displayed for a single topic in the sony - canon camera comparison . as we can see , the sony camera ( left column ) has potential red - eye and flash problems , whereas the nearest topic from the canon model suggests higher design and picture quality . the relatively high similarity ( 37% ) of these topics suggests that the two do indeed describe the same facet of the cameras . looking at the top topic reviews for the sony camera , we see concerns about indoor picture quality compared to the canon . as such , users whose primary concern is picture quality can quickly see that the canon is generally regarded as superior in this area . however , cost - conscious users who require a cheaper and more compact device can also benefit from this interface by learning that the sony camera's problems tend to stem from picture quality issues rather than device failures . for future work , we wish to improve the scalability and performance of our system . in terms of modeling performance , replacing mallet with higher - performing implementations based on ideas described in would reduce sampling time and allow our system to scale to a larger number of users . we also expect future project iterations to use a continuously updating , crawler - based system in order to provide more up - to - date information and allow us to pre - process new data as it arrives . we also plan to explore alternative topic distance metrics . compared to our limited hellinger distance metric , we suspect that including part - of - speech information will lead to better pairings . specifically , we plan to experiment with computing distances on nouns only , allowing sentiment ( typically manifested in adjective use ) to arise naturally . related part - of - speech experiments may also prove useful . a known deficiency of topic models is the presence of noisy topics . paralleling our use of per - topic customer ratings , we plan to explore per - topic helpfulness ratings , based on our intuition that noisy and less informative topics would receive a higher proportion of weighted unhelpful votes and can therefore be automatically pruned out . fay chang , jeffrey dean , sanjay ghemawat , wilson c. hsieh , deborah a. wallach , mike burrows , tushar chandra , andrew fikes , and robert e. gruber . bigtable : a distributed storage system for structured data . acm transactions on computer systems , 26(2):4 , 2008 .
|
in this project we outline a modularized , scalable system for comparing amazon products in an interactive and informative way using efficient latent variable models and dynamic visualization . we demonstrate how our system can build on the structure and rich review information of amazon products in order to provide a fast , multifaceted , and intuitive comparison . by providing a condensed per - topic comparison visualization to the user , we are able to display aggregate information from the entire set of reviews while providing an interface that is at least as compact as the `` most helpful reviews '' currently displayed by amazon , yet far more informative .
|
setting limits for new particles or decay modes has been an active research area for many years . in high energy physics it received renewed interest with the unified method by feldman and cousins . giunti and roe and woodroofe gave variations of the unified method , trying to resolve an apparent anomaly when there are fewer events in the signal region than expected . they all discuss the problem of setting limits for the case of a known background rate . the case of an unknown background rate was discussed in a conference talk by feldman , and a method for handling this case was developed by rolke and lópez . little work has been done , though , on the question of claiming a discovery . this problem could be handled by finding a confidence interval and claiming a discovery if the lower limit is positive . instead , the question of a discovery should be addressed separately , by performing a hypothesis test with the null hypothesis : there is no signal present . rejecting this hypothesis will then lead to a claim for a new discovery . in carrying out a hypothesis test one needs to decide on the type i error probability , the probability of falsely rejecting the null hypothesis . this is of course equivalent to the major mistake to be guarded against , namely that of falsely claiming a discovery . in practice a hypothesis test is often carried out by finding the p - value . this is the probability that an identical experiment will yield a result as extreme ( with respect to the null hypothesis ) or even more so , given that the null hypothesis is true . then , if the p - value is below the chosen threshold we reject ; otherwise we fail to do so . for the test discussed here it is not possible to compute the p - value analytically , and therefore we will find the p - value via monte carlo . maybe the most important decision in carrying out a hypothesis test is the choice of the type i error probability , or what we might call the discovery threshold . as we shall see , this decision is made much easier by the method described here because we will need only one threshold , regardless of how the analysis was done . what a proper discovery threshold should be in high energy physics is a question outside the scope of this paper , although we might suggest ( roughly equivalent to ) . sinervo argues for a much stricter standard of , or . we believe that such extreme values were used in the past because it was felt that the calculated p - values were biased downward by the analysis process , and a small threshold was needed in order to compensate for any unwittingly introduced biases . if we were to trust that our p - value is in fact correct , such an error rate should be acceptable . a general introduction to hypothesis testing with applications to high energy physics is given in sinervo . a classic reference for the theory of hypothesis testing is lehmann . our test uses or as the test statistic , depending on whether the background rate is assumed to be known or not . here is the number of observations in the signal region , is the number of observations in the background region and is the probability that a background event falls into the background region divided by the probability that it falls into the signal region . therefore is the estimated background in the signal region and is an estimate for the signal rate . is the maximum likelihood estimator of , and it is the quantity used in feldman and cousins without being set to when is negative . this is not necessary here because a negative value of will clearly lead to a failure to reject . other choices for the test statistic are of course possible .
for example , a measure for the size of a signal that is often used in high energy physics is . under the null hypothesisthis statistic is approximately gaussian , at least if there is sufficient data .unfortunately the approximation is not sufficiently good in the extreme tails where a new discovery is made , leading to p - values that are much smaller than is warranted . even when using monte carlo to compute the true p - value , this test statistic can be shown to be inferior to the one proposed in our method because it has consistently lower power , that is its probability of detecting a real signal is smaller . in order to find the p - value of the test we need to know the null distribution . in the simplest case of a known background rate and everything else fixed thisis given by the poisson distribution , but in all other cases it is not possible to compute the null distribution analytically , and we will therefore find it via monte carlo . as an illustration consider the following case shown in figure 1 : here we have events on the interval ] .there are events in the signal region , and the background distribution is known to be flat .therefore we find , , and . because we know that the background is flat on ] and computing for this monte carlo data set . repeating this timeswe find the histogram of monte carlo values shown in figure 2 , case 1 . in this simulation of the simulation runs had a value of greater than , or .using we would therefore reject the null hypothesis and claim a discovery .note that in addition to rejecting the null hypothesis we can also turn the p - value into a significance by using the gaussian distribution and claim that this signal is a effect . how would things change if the signal region had not been fixed a priori but instead was found by searching through all signal regions centered at we would have accepted any signal with a width between and ? that is if we had kept the signal location fixed but find the signal width that maximizes , the estimate of the number of signal events ?again we can find the null distribution via monte carlo , repeating the exact analysis for each simulation run individually .the histogram of values for this case is shown in figure 2 , case 2 . herewe find a value of larger than in of the runs for a p - value of or . at a discovery threshold of would therefore not find this signal significant anymore .even more , what if we also let the signal location vary , say anywhere in ] as the signal region and ] and to signals at least and at most wide , because otherwise the largest value of is almost always found for a very wide signal region , even when a clear narrow signal is present .this restriction will not induce a bias as long as the decision on where to search are made a priori .in the general situation where the background is not flat on ] and ( if present ) a gaussian signal at with a width of .then we find the signal through a variety of situations , from the one extreme where everything is fixed a priori to the other where the largest signal of any width is found . 
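Returning to the Monte Carlo procedure illustrated above, the snippet below sketches the flat-background case. The symbols elided in the extracted text are read, from the verbal description, as the signal-region count, the background-region count, and the ratio of background probabilities, so the statistic is the signal-region count minus the background-region count divided by that ratio. All numerical values below are illustrative, not the paper's.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def statistic(events, lo, hi):
    """Estimated signal: signal-region count minus estimated background."""
    x = np.sum((events >= lo) & (events < hi))     # events in the signal region
    y = len(events) - x                            # events in the background region
    tau = (1.0 - (hi - lo)) / (hi - lo)            # ratio of background probabilities
    return x - y / tau

def mc_p_value(n_events, lo, hi, observed, n_mc=100_000):
    """Fraction of background-only pseudo-experiments (flat background on [0, 1])
    whose statistic is at least as large as the observed one."""
    stats = np.array([statistic(rng.uniform(0.0, 1.0, n_events), lo, hi)
                      for _ in range(n_mc)])
    return np.mean(stats >= observed)

n_events, lo, hi = 100, 0.45, 0.55     # illustrative event count and signal region
observed = 10.0                        # illustrative observed value of the statistic
p = mc_p_value(n_events, lo, hi, observed)
print(p, norm.isf(p))                  # p-value and its one-sided Gaussian significance
```

When the signal region is found by a search, the same search has to be repeated inside every pseudo-experiment, so that the null distribution reflects the full analysis chain exactly as the text describes.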
the background density is found by fitting and the background rate is estimated .the power curves are shown in figure 3 .no matter what combination of items were fixed a priori or were used to maximize the test statistic , and with it the signal to noise ratio , all cases achieved the desired type i error probability , .not surprisingly the more items are fixed a priori , the better the power of the test .we have described a statistical hypothesis test for the presence of a signal .our test is conceptually simple and very flexible , allowing the researcher a wide variety of choices during the analysis stage .it will yield the correct type i error probability as long as the monte carlo used to find the null distribution exactly mirrors the steps taken for the data .monte carlo studies have shown that this method has satisfactory power. 1 r.d .cousins , g.j .feldman , a unified approach to the classical statistical analysis of small signals , phys .rev , d57 , ( 1998 ) 3873.c .giunti , a new ordering principle for the classical statistical analysis of poisson processes with background , phys .rev d59 , 053001 ( 1999 ) .roe , m.b .woodroofe , improved probability method for estimating signal in the presence of background , phys .rev d60 053009 ( 1999 ) g. feldman , multiple measurements and parameters in the unified approach , talk at fermilab workshop on confidence limits 27 - 28 march , 2000 , http://conferences.fnal.gov/cl2k/ , p.10 - 14 .rolke , a.m. lpez , confidence intervals and upper bounds for small signals in the presence of background noise , nucl .inst . and methods a458 ( 2001 ) 745 - 758 p.k .sinervo , signal significance in particle physics , proceedings of the conference : advanced statistical techniques in particle physics , institute for particle physics phenomenology , university of durham , uk , 2002 , 64 - 76 .lehmann testing statistical hypotheses , 2nd ed .( 1986 ) wiley , new yorkthis work was partially supported by the division of high energy physics of the us department of energy .
|
we describe a statistical hypothesis test for the presence of a signal . the test allows the researcher to fix the signal location and/or width a priori , or perform a search to find the signal region that maximizes the signal . the background rate and/or distribution can be known or might be estimated from the data . cuts can be used to bring out the signal .
|
recently , there has been a considerable interest in studying the fractional poisson process ( fpp ) .the early development of the fpp is due .later , a rich growth of the literature is contributed by .it is proved in that the fpp can be seen as the subordination of the poisson process by an independent inverse -stable subordinator , that is , where is the poisson process with rate and is the inverse -stable subordinator .the relation between the inverse -stable subordinator and the -stable subordinator is where the laplace transform ( lt ) of the -stable subordinator is given by =e^{-ts^\beta} ] satisfies ( see ) & & ^_tp__(n|t , ) & = - , n1 , & [ fpp - definition ] + & & ^_tp__(0|t , ) & = -p__(0|t , ) , & here , denotes the caputo - fractional derivative defined by the _ pmf _ of the fpp is given by ( see ) the mean , variance and covariance functions ( see ) of the fpp are given by & = qt^{\beta};~ \mbox{var}[n_{\beta}(t,\lambda)]=q t^{\beta}+rt^{2\beta } , \label{fppmean } \\ \text{cov}[n_{\beta}(s,\lambda),n_{\beta}(t,\lambda)]&=qs^{\beta}+ ds^{2\beta}+ q^{2}[\beta t^{2\beta}b(\beta,1+\beta;s / t)-(st)^{\beta}],~0<s\leq t,\label{fpp - cov}\end{aligned}\ ] ] where , , , and , is the incomplete beta function .in this section , we consider the fpp time - changed by a subordinator , defined in , for which the moments <\infty ] let be the tempered -stable subordinator with lt =e^{-t\left((\mu+s)^\alpha-\mu^\alpha\right)}.\ ] ] the _ pdf _ of the tempered -stable subordinator is given by ( see ( * ? ? ? * eq . ( 2.2 ) ) ) where is the _ pdf _ of the -stable subordinator .the fpp time - changed by an independent tempered -stable subordinator is defined as in this case , the _ pmf _ reduces to =\frac{\lambda^ne^{\mu^\beta t}}{n!}\sum_{k=0}^{\infty}\frac{(n+k)!}{k!}\frac{(-\lambda)^k}{\gamma(\beta(k+n)+1)}\mathbb{e}[(d_{\alpha}(t))^{\beta(n+k)}e^{-\mu d_\alpha(t)}],~n\geq0.\ ] ] it is easy to show that =1. ] we next obtain the mean , the variance and the covariance functions of the tcfpp - i .[ mean - var - cov - tcfpp-1 ] let and .the distributional properties of the tcfpp - i are as follows : + ( i ) =q\mathbb{e}[d_{f}^{\beta}(t)] ] , [ q^f_(s),q^f_(t)]&=q[d_f^(s)]+ d[d_f^2(s)]-q^2[d_f^(s)][d_f^(t ) ] & & & + & + q^2 . & using , we get &=\mathbb{e}\big[\mathbb{e}[n_{\beta}(d_{f}(t))|d_{f}(t)]\big]=q\mathbb{e}[d_{f}^{\beta}(t ) ] , \end{aligned}\ ] ] which proves part ( i ) . from and , =qs^{\beta}+ds^{2\beta}+ q^{2}\beta\left[t^{2\beta}b(\beta,1+\beta;s / t)\right],\ ] ] which leads to &=\mathbb{e}\left[\mathbb{e}[n_{\beta}(d_{f}(s))n_{\beta}(d_{f}(t))|(d_{f}(s),d_{f}(t))]\right]\nonumber\\ & = q\mathbb{e}[d_{f}^{\beta}(s)]+ d\mathbb{e}[d_{f}^{2\beta}(s)]+\beta q^{2 } \mathbb{e}\left[d_{f}^{2\beta}(t)b\left(\beta,1+\beta;\frac{d_{f}(s)}{d_{f}(t)}\right)\right].\label{bivariate - fnbfp } \end{aligned}\ ] ] by and , part ( iii ) follows . part ( ii ) follows from part ( iii ) by putting .* index of dispersion . * the index of dispersion for a counting process is defined by ( see ) }{\mathbb{e}[x(t)]}.\ ] ] the stochastic process is said to be overdispersed if for all ( see ) . 
since the mean of the tcfpp - i is nonnegative , it suffices to show that var-\mathbb{e}[q_{\beta}^{f}(t)]>0 ] , where is the bernstein function defined in .if \rightarrow\infty ] is \sim\e\left[d_{f } ^{\beta}(s)\right]\e\left[d_{f } ^{\beta}(t - s)\right ] .\end{aligned}\ ] ] ( ii ) the asymptotic expansion of t ] as , is satisfied for the following subordinators .+ ( a ) gamma subordinator : it is known ( see ) that =\frac{\gamma(pt+\beta)}{\alpha^\beta\gamma(pt)}\sim \left(\frac{pt}{\alpha}\right)^\beta ] as .+ ( b ) inverse gaussian subordinator : the moments of inverse gaussian subordinator are given in .the asymptotic expansion of , for large , is ( see ( * ? ? ?* eq . ( a.9 ) ) ) where . for , the asymptotic expansion of , as , is =\left(\frac{\delta t}{\gamma}\right)^\beta\left(1+o\left(\tfrac{1}{t}\right)\right),~~(\text{using } \eqref{bessel - asym})\ ] ] which implies that \to\infty ] that satisfies \leq c_2(s)t^{-d},\ ] ] for large , , and .that is , }{t^{-d}}=c(s),\]]for some and we say has the long - range dependence ( lrd ) property if and has the short - range dependence ( srd ) property if .note implies that corr ] and \sim k_2t^{2\rho } , ] given in theorem [ mean - var - cov - tcfpp-1 ] ( iii ) , namely , .\ ] ] using theorem [ lemma - asym ] ( ii ) , we get for large , \e[d_f^{\beta}(t - s)]\label{autocovariance - last - summand-1}. \end{aligned}\ ] ] using , theorem [ mean - var - cov - tcfpp-1 ] ( iii ) and \sim k_1t^\rho ] , we have that &\sim qk_1t^\rho- q^{2}(k_1t^\rho ) ^2 + 2dk_2t^{2\rho}\nonumber\\ & \sim 2dk_2t^{2\rho}- q^{2}k_1 ^ 2t^{2\rho}~~(\text{see definition } \ref{def - asym})\nonumber\\ & = \frac{\lambda^2}{\beta}\left(\frac{k_2}{\gamma(2\beta)}-\frac{k_1 ^ 2}{\beta\gamma^2(\beta)}\right)t^{2\rho}\nonumber\\ & = d_1t^{2\rho},\label{variance - large - t } \end{aligned}\ ] ] where .thus , from and , we have for large , &\sim\frac{q\mathbb{e}[d_f^{\beta}(s)]+d\mathbb{e}[d_f^{2\beta}(s)]-q^{2}k_1s\rho \mathbb{e}[d_f^{\beta}(s)]t^{\rho-1}}{\sqrt{\text{var}[q^f_{\beta}(s)]}\sqrt{d_1t^{2\rho}}}\nonumber\\ & = \left(\frac{q\mathbb{e}[d_f^{\beta}(s)]+d\mathbb{e}[d_f^{2\beta}(s)]}{\sqrt{d_{1}\text{var}[q^f_{\beta}(s)]}}\right)t^{-\rho}-\frac{q^{2}k_1s\rho \mathbb{e}[d_f^{\beta}(s)]}{\sqrt{t^{2\rho}d_{1}}\sqrt{\text{var}[q^f_{\beta}(s)]}}t^{-1}\nonumber\\ & \sim \left(\frac{q\mathbb{e}[d_f^{\beta}(s)]+d\mathbb{e}[d_f^{2\beta}(s)]}{\sqrt{d_{1}\text{var}[q^f_{\beta}(s)]}}\right)t^{-\rho}\nonumber , \end{aligned}\ ] ] which decays like the power law .hence , the tcfpp - i exhibits the lrd property . from remark[ example - remark ] , it can be seen that moments of the gamma subordinator has the asymptotic expansion \sim ( p/\alpha)^\beta t^\beta ] .therefore , the fnbp exhibits the lrd property .similarly , for the inverse gaussian subordinator , we have the asymptotic expansion \sim \left(\delta /\gamma\right)^\beta t^\beta ] .hence , also has the lrd property .we call a function regularly varying at 0 + with index ( see ) if we first reproduce the following law of iterated logarithm ( lil ) for the subordinator from ( * ? ? ?* chapter iii , theorem 14 ) .let be a subordinator with =e^{-tf(s)} ] . here ,the corresponding bernstein function is regularly varying with index . 
therefore , by corollary [ lil - coro ] , we have the lil for the space fractional poisson process with first - exit time of the subordinator is its right - continuous inverse , defined by and is called an _ inverse subordinator _ ( see ) .note that for any , <\infty ] , + ( ii)=q\mathbb{e}[e_{f}^{\beta}(t)]\left(1-q\mathbb{e}[e_{f}^{\beta}(t)]\right)+2d\mathbb{e}[e_{f}^{2\beta}(t)] ] .therefore , we study the asymptotic behavior of ] .the lt of the -th moment of is given by ( see ) where is the bernstein function associated with .the asymptotic moments can be specifically computed for special cases , which also serves examples of the tcfpp - ii processes .we study the fpp time - changed by the inverse of the gamma subordinator , with corresponding bernstein function .the right - continuous inverse of the gamma subordinator is defined as we study the asymptotic behavior of the mean of , that is , the function =q\mathbb{e}[e_y^\beta(t)] ] is given by \right]=\frac{\gamma(1+\beta)}{s(p\log(1+s/\alpha))^\beta}.\ ] ] note that as .now using theorem [ tauberian ] , we get ( see also ( * ? ? ?* proposition 4.1 ) ) &=q\e[e^\beta_y(t)]\sim q(t\alpha / p)^\beta,~\text{as } t\to\infty .\end{aligned}\ ] ] the asymptotic behavior of variance function of can also be computed using above expression .we next show that the tcfpp - ii is a renewal process .we begin with the following lemma .+ let be a subordinator with the associated bernstein function .let be the right - continuous inverse of .we call , rather loosely , the inverse subordinator corresponding to .let and be two independent inverse subordinators corresponding to bernstein functions and , respectively .then where consider next have the inverse subordinators defined by then the process which completes the proof .[ lemma - ebeta ] let be inverse -stable subordinator corresponding to , and be an inverse subordinator corresponding to .then from , where . one can further generalize the tcfpp - i process and tcfpp - ii process , by subordinating it again with a subordinator and an inverse subordinator , respectively . as it clearly shown in and , the subordination of subordinator and inverse subordinator yields again a subordinator and an inverse subordinator , respectively . hence, further subordination leads again to the processes of type tcfpp - i and tcfpp - ii .this is also valid for -iterated subordination .the tcfpp - ii is a renewal process with _ iid _ waiting times with distribution =\mathbb{e}[e^{-\lambda e_{\phi}(t)}],\ ] ] where is the inverse subordinator corresponding to . using and corollary [ lemma - ebeta ], we have where therefore , the tcfpp - ii is a poisson process time - changed by an inverse subordinator corresponding to bernstein function from ( * ? ? ?* theorem 4.1 ) , we deduce that the time - changed poisson process is a renewal process with _ iid _ waiting times having the distribution . by (* remark 5.4 ) , the _ pmf _ , given in , of the tcfpp - ii satisfies in the mild sense , where , is the lvy measure associated to bernstein function and is the heaviside function .we next present the bivariate distributions of the tcfpp - ii , which generalizes a result by ( * ? ? 
?* theorem 2.1 ) .let be the distribution function of the waiting time and be the time of jump .since s are _ iid _ , we have that =f^{\ast n}(t) ] and =df^{\ast k}(t) ] and is the -fold convolution of , with , the dirac delta function at zero ._ case 1 _ : when , we have ( see figure [ fig : mequaln ] ) ( -.31,0 ) ( 9.3,0 ) ; ( 0,-.1 ) ( 0 , .1 ) node[below ] at ( 0,-.1 ) 0 ; ( 2.5,-.1 ) ( 2.5 , .1 ) node[above ] at ( 2.5,.1 ) ; ( 3,-.1 ) ( 3 , .1 ) node[below ] at ( 3,-.1 ) ; ( 7,-.1 ) ( 7 , .1 ) node[below ] at ( 7,-.1 ) ; ( 7.5,-.1 ) ( 7.5 , .1)node[above ] at ( 7.5,.1 ) ; ( 2.5,-0.7 ) ( 7.5,-0.7 ) node[below ] at ( 5,-.6 ) ; + &=\mathbb{p}\big[0<s_m\leq s;~t < s_{m+1}\big]=\mathbb{p}\big[0<s_m\leq s;~t < s_{m}+\tau_m^{(1)}\big]\nonumber\\ & = \mathbb{p}\big[0<s_m\leq s;~\tau_m^{(1)}>t - s_m\big]\\ & = \int_{0}^{s}df^{*m}(u)\mathbb{p}[\tau_m^{(1)}>t - u\big]~~(\text{since } s_m\text { and } \tau_m^{(1 ) } \text { are independent})\\ & = \int_{0}^{s}\mathbb{e}[e^{-\lambda e_{\phi}(t - u)}]df^{\ast m}(u ) . \end{aligned}\ ] ] _ case 2 : _when , it follows that ( see figure [ fig : mlessn ] ) ( -.31,0 ) ( 9.3,0 ) ; ( 0,-.1 ) ( 0 , .1 ) node[below ] at ( 0,-.1 ) 0 ; ( 2.5,-.1 ) ( 2.5 , .1 ) node[above ] at ( 2.5,.1 ) ; ( 3,-.1 ) ( 3 , .1 ) node[below ] at ( 3,-.1 ) ; ( 3.7,-.1 ) ( 3.7 , .1)node[above ] at ( 3.7,.1 ) ; ( 6.3,-.1 ) ( 6.3 , .1)node[above ] at ( 6.3,.1 ) ; ( 7,-.1 ) ( 7 , .1 ) node[below ] at ( 7,-.1 ) ; ( 8,-.1 ) ( 8 , .1)node[above ] at ( 8,.1 ) ; ( 2.5,-1 ) ( 3.7,-1 ) node[below ] at ( 3.1,-1.1 ) ; ( 3.7,-0.5 ) ( 6.3,-0.5 ) node[below ] at ( 5,-.6 ) ; ( 6.3,-1 ) ( 8,-1 ) node[below ] at ( 7.2,-1.1 ) ; \nonumber\\&=\mathbb{p}\big[0<s_m\leq s;~\tau^{(1)}_m > s - s_m;~\tau^{(1)}_m < t - s_m;0<\tau_{m+1}^{(n - m-1)}<t - s_{m+1};~\tau^{(1)}_m > t - s_n\big]\nonumber\\ & = \mathbb{p}\big[0<s_m\leq s;s - s_m<\tau^{(1)}_m < t - s_m;0<\tau_{m+1}^{(n - m-1)}<t - s_{m}-\tau^{(1)}_m;\nonumber\\&\hspace*{10cm}\tau^{(1)}_n > t - s_m-\tau^{(1)}_m-\tau_{m+1}^{(n - m-1)}\big].\nonumber \end{aligned}\ ] ] since the waiting times between events are _ iid _ , we have that \\ & = \int_{0}^{s}\mathbb{p}[s_m\in du ] \int_{s - u}^{t - u}\mathbb{p}[\tau^{(1)}_m\in dv]\int_{0}^{t-(u+v)}\mathbb{p}[\tau_{m+1}^{(n - m-1)}\in dw]\nonumber\int_{t-(u+v+w)}^{\infty}\mathbb{p}\left[\tau^{(1)}_n\indx\right]\nonumber\\ & = \int_{0}^{s}df^{\ast m}(u ) \int_{s - u}^{t - u}df(v)\int_{0}^{t-(u+v)}df^{\ast ( n - m-1)}(w)\mathbb{p}[\tau^{(1)}_n>(t - u - v - w)]\nonumber\\ & = \int_{0}^{s}df^{\ast m}(u ) \int_{s - u}^{t - u}df(v)\int_{0}^{t-(u+v)}\mathbb{e}[e^{-\lambda e_{\phi}(t - u - v - w)}]df^{\ast ( n - m-1)}(w)\nonumber , \end{aligned}\ ] ] which completes the proof .let us examine a special case of theorem [ bivariate - thm ] for the fpp .in this section , we present simulated sample paths for some tcfpp - i and tcfpp - ii processes .the sample paths for the fnbp , the fpp subordinated with tempered -stable subordinator ( fpp - tss ) and the fpp subordinated with inverse gaussian subordinator ( fpp - ign ) are presented for a chosen set of parameters .the simulations of the corresponding tcfpp - ii process of the fpp subordinated with inverse gamma subordinator ( fpp - ig ) , the fpp subordinated with inverse tempered -stable subordinator ( fpp - itss ) , and the fpp subordinated with inverse of inverse gaussian subordinator ( fpp - iign ) are also given in this section .we first present the algorithm for simulation of the fpp .[ simulation - fpp ] this algorithm ( see ) gives the number of events of the fpp 
up to a fixed time .a. fix the parameters and for the fpp .b. set and c. repeat while 1 .generate three independent uniform random variables , .2 . compute ( see ) ^{1/\beta-1}}{[\sin(\pi u_2)]^{1/\beta}|\ln u_3|^{1/\beta-1}}.\ ] ] 3 . and .d. next .then denotes the number of events occurred up to time .we next present the algorithms for the simulation of the gamma subordinator , the tempered -stable subordinator and the inverse gaussian subordinator .the generated sample paths from these algorithms will then be used to simulate the inverse subordinator and the tcfpp - i .[ simulation - gamma ] a. b. fix the parameters and for gamma subordinator .c. choose an interval . ] choose time points d. simulate for from the algorithm 3.2 of .e. compute the increments with f. the discretized sample path of at is [ simulation - ig ] the algorithm to generate the ign random variables is given in .a. choose an interval . ] random variable .5 . if , return ; else return . c. assign .the discretized sample path of at is consider next the algorithm to simulate the inverse subordinator .we first define with the step length as ( see ) where is the value of the subordinator evaluated at , which can be simulated by using the method presented above .observe that trajectory of has increments of length at random time instants governed by process and therefore is the approximation of operational time .a. b. fix the parameters for the inverse subordinator , whichever under consideration .c. choose uniformly spaced time points with d. let and .e. repeat while 1 .generate an independent random variables with 2 .set .3 . 4 .next .f. the discretized sample path of at is with note that the simulations for the inverse of gamma subordinator , the inverse of tempered -stable subordinator and the inverse of inverse gaussian subordinator can be done using the above algorithm by replacing the special case for the subordinator .+ we next present a general algorithm to simulate the tcfpp - i , namely the fnbp , the fpp - tss and the fpp - ign processes .the same algorithm can be used to simulate the tcfpp - ii , namely the fpp - ig , the fpp - itss and the fpp - iign processes .[ simulation - fnbp ] a. b.fix the parameters for the subordinator ( inverse subordinator ) , under consideration . choose the fractional index and rate parameter for the fpp .c. fix the time for the time interval $ ] and choose uniformly spaced time points with .d. simulate the values of the subordinator ( inverse subordinator ) at using the algorithm for respective subordinator ( inverse subordinator ) .e. using the values generated in step ( c ) , as time points , compute the number of events of the fpp using algorithm [ simulation - fpp ] .0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.51 0.51 0.51 0.51a part of this work was done while the second author was visiting the department of statistics and probability , michigan state university , during summer-2016 . [ 2 ] http://www.ams.org/mathscinet-getitem?mr=#1[#2 ] [ 2]#2 alrawashdeh , m. s. , kelly , j. f. , meerschaert , m. m. and scheffler , h. -p . :applications of inverse tempered stable subordinators , comput ., in press , ( 2016 ) .applebaum , d. : _ lvy processes and stochastic calculus_. second ed ., cambridge university press , cambridge , 2009 .veillette , m. and taqqu , m. s. : numerical computation of first passage times of increasing lvy processes .* 12 * , ( 2010 ) , 695729 .vellaisamy , p. and kumar , a. : first - exit times of an inverse gaussian process , revised for stochastics , ( 2016 ) .
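Gathering the simulation algorithms above into a runnable sketch: the waiting-time expression in the FPP algorithm is garbled in the extracted text, so the code below uses the standard three-uniform generator for Mittag-Leffler inter-arrival times that the visible fragments appear to correspond to (it reduces to exponential waiting times when beta equals 1); it should be checked against the cited reference before serious use. Parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def fpp_event_times(t_max, beta, lam):
    """Arrival times of the fractional Poisson process on [0, t_max]."""
    times, total = [], 0.0
    while True:
        u1, u2, u3 = rng.uniform(size=3)
        # Assumed Mittag-Leffler waiting-time generator (three independent uniforms)
        dt = (np.abs(np.log(u1)) ** (1 / beta) / lam ** (1 / beta)
              * np.sin(beta * np.pi * u2)
              * np.sin((1 - beta) * np.pi * u2) ** (1 / beta - 1)
              / (np.sin(np.pi * u2) ** (1 / beta)
                 * np.abs(np.log(u3)) ** (1 / beta - 1)))
        total += dt
        if total > t_max:
            return np.array(times)
        times.append(total)

def gamma_subordinator_path(grid, p, alpha):
    """Discretized gamma subordinator: independent Gamma(p*dt, 1/alpha) increments."""
    dt = np.diff(grid, prepend=0.0)
    return np.cumsum(rng.gamma(shape=p * dt, scale=1.0 / alpha))

# TCFPP-I with a gamma subordinator (the FNBP): evaluate the FPP at the random
# times given by the subordinator path.
beta, lam, p, alpha, T = 0.9, 1.0, 1.0, 1.0, 10.0
grid = np.linspace(0.0, T, 201)[1:]
path = gamma_subordinator_path(grid, p, alpha)          # D(t_i)
arrivals = fpp_event_times(path[-1], beta, lam)         # FPP event times
fnbp = np.searchsorted(arrivals, path, side="right")    # N_beta(D(t_i))
print(fnbp[-1], "events of the FNBP by time", T)
```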
|
in this paper , we study the fractional poisson process ( fpp ) time - changed by an independent lvy subordinator and the inverse of the lvy subordinator , which we call tcfpp - i and tcfpp - ii , respectively . various distributional properties of these processes are established . we show that , under certain conditions , the tcfpp - i has the long - range dependence property and also its law of iterated logarithm is proved . it is shown that the tcfpp - ii is a renewal process and its waiting time distribution is identified . its bivariate distributions and also the governing difference - differential equation are derived . some specific examples for both the processes are discussed . finally , we present the simulations of the sample paths of these processes .
|
there are many fascinating issues associated with eternal inflation , so i can think of no subject more appropriate to discuss in a volume commemorating david schramm .the shock of dave s untimely death showed that even the most vibrant of human lives is not eternal , but his continued influence on our entire field proves that in many ways david schramm is truly eternal .dave is largely responsible for creating the interface between particle physics and cosmology , and is very much responsible for cementing together the community in which this interface developed . his warmth , his enthusiasm , andthe efforts that he made to welcome young scientists to the field have strengthened our community in a way that will not be forgotten .i will begin by summarizing the basics of inflation , including a discussion of how inflation works , and why many of us believe that our universe almost certainly evolved through some form of inflation .this material is not new , but i think it should certainly be included in any volume that attempts to summarize the important advances that dave helped to develop and promote. then i will move on to discuss eternal inflation , attempting to emphasize that this topic has important implications , and raises important questions , which should not be dismissed as being metaphysical .the key property of the laws of physics that makes inflation possible is the existence of states of matter that have a high energy density which can not be rapidly lowered . in the original version of the inflationary theory ,the proposed state was a scalar field in a local minimum of its potential energy function .a similar proposal was advanced by starobinsky , in which the high energy density state was achieved by curved space corrections to the energy - momentum tensor of a scalar field .the scalar field state employed in the original version of inflation is called a _ false vacuum _ , since the state temporarily acts as if it were the state of lowest possible energy density .classically this state would be completely stable , because there would be no energy available to allow the scalar field to cross the potential energy barrier that separates it from states of lower energy .quantum mechanically , however , the state would decay by tunneling .initially it was hoped that this tunneling process could successfully end inflation , but it was soon found that the randomness of false vacuum decay would produce catastrophically large inhomogeneities .these problems were summarized in ref . , and described more fully by hawking , moss , and stewart and by guth and weinberg .this `` graceful exit '' problem was solved by the invention of the new inflationary universe model by linde and by albrecht and steinhardt .new inflation achieved all the successes that had been hoped for in the context of the original version . in this theory inflationis driven by a scalar field perched on a plateau of the potential energy diagram , as shown in fig .[ newinf ] .such a scalar field is generically called the _ inflaton_. 
if the plateau is flat enough , such a state can be stable enough for successful inflation .soon afterwards linde showed that the inflaton potential need not have either a local minimum or a gentle plateau : in the scenario he dubbed _chaotic inflation _ , the inflaton potential can be as simple as provided that begins at a large enough value so that inflation can occur as it relaxes .for simplicity of language , i will stretch the meaning of the phrase `` false vacuum '' to include all of these cases ; that is , i will use the phrase to denote any state with a high energy density that can not be rapidly decreased .note that while inflation was originally developed in the context of grand unified theories , the only real requirement on the particle physics is the existence of a false vacuum state . in this sectioni will summarize the workings of new inflation , and in the following section i will discuss chaotic inflation . while more complicated possibilities ( e.g. hybrid inflation and supernatural inflation ) appear very plausible , the basic scenarios of new and chaotic inflation will be sufficient to illustrate the physical effects that i want to discuss in this article .suppose that the energy density of a state is approximately equal to a constant value . then, if a region filled with this state of matter expanded by an amount , its energy would have to increase by this energy must be supplied by whatever force is causing the expansion , which means that the force must be pulling against a negative pressure . the work done by the force is given by where is the pressure inside the expanding region . equating the work with the change in energy, one finds this negative pressure is the driving force behind inflation .when one puts this negative pressure into einstein s equations , one finds that it leads to a repulsion , causing such a region to undergo exponential expansion .if the region can be approximated as isotropic and homogeneous , this result can be seen from the standard friedmann - robertson - walker ( frw ) equations : where is the scale factor , is newton s constant , and we adopt units for which . for late times the growing solution to this equation has the form of course inflationary theorists prefer not to assume that the universe began homogeneously and isotropically , but there is considerable evidence for the `` cosmological no - hair conjecture '' , which implies that a wide class of initial states will approach this exponentially expanding solution . the basic scenario of new inflation begins by assuming that at least some patch of the early universe was in this peculiar false vacuum state . in the original papers this initial condition was motivated by the fact that , in many quantum field theories , the false vacuum resulted naturally from the supercooling of an initially hot state in thermal equilibrium .it was soon found , however , that quantum fluctuations in the rolling inflaton field give rise to density perturbations in the universe , and that these density perturbations would be much larger than observed unless the inflaton field is very weakly coupled . 
for such weak coupling there would be no time for an initially nonthermal state to reach thermal equilibrium .nonetheless , since thermal equilibrium describes a probability distribution in which all states of a given energy are weighted equally , the fact that thermal equilibrium leads to a false vacuum implies that there are many ways of reaching a false vacuum .thus , even in the absence of thermal equilibrium even if the universe started in a highly chaotic initial state it seems reasonable to assume that some small patches of the early universe settled into the false vacuum state , as was suggested for example in ref .linde pointed out that even highly improbable initial patches could be important if they inflated , since the exponential expansion could still cause such patches to dominate the volume of the universe .one might hope ultimately to calculate the probability of regions settling into the false vacuum from a quantum description of cosmogenesis , but i will argue in sec .[ implications ] that this probability is quite irrelevant in the context of eternal inflation .once a region of false vacuum materializes , the physics of the subsequent evolution is rather straightforward .the gravitational repulsion caused by the negative pressure will drive the region into a period of exponential expansion . if the energy density of the false vacuum is at the grand unified theory scale ( , eq .( [ eq:6 ] ) shows that the time constant of the exponential expansion would be about sec .for inflation to achieve its goals , this patch has to expand exponentially for at least 60 e - foldings .then , because the false vacuum is only metastable ( the inflaton field is perched on top of the hill of the potential energy diagram of fig . [ newinf ] ) , eventually it will decay .the inflaton field will roll off the hill , ending inflation . when it does , the energy density that has been locked in the inflaton field is released . because of the coupling of the inflaton to other fields , that energy becomes thermalized to produce a hot soup of particles , which is exactly what had always been taken as the starting point of the standard big bang theory before inflation was introduced . from here on the scenario joins the standard big bang description .the role of inflation is to establish dynamically the initial conditions which otherwise have to be postulated .the inflationary mechanism produces an entire universe starting from essentially nothing , so one needs to answer the question of where the energy of the universe comes from .the answer is that it comes from the gravitational field .the universe did not begin with this colossal energy stored in the gravitational field , but rather the gravitational field can supply the energy because its energy can become negative without bound . 
as more and more positive energy materializes in the form of an ever - growing region filled with a high - energy scalar field , more and more negative energy materializes in the form of an expanding region filled with a gravitational field .the total energy remains constant at some very small value , and could in fact be exactly zero .there is nothing known that places any limit on the amount of inflation that can occur while the total energy remains exactly zero .chaotic inflation can occur in the context of a more general class of potential energy functions .in particular , even a potential energy function as simple as eq .( [ eq:1])describing a scalar field with a mass and no interaction is sufficient to describe chaotic inflation .chaotic inflation is illustrated in fig .[ chaoticinf ] . in this casethere is no state that bears any obvious resemblance to the false vacuum of new inflation .instead the scenario works by supposing that chaotic conditions in the early universe produced one or more patches in which the inflaton field was at some high value on the potential energy curve .inflation occurs as the inflaton field rolls down the hill .as long as the initial value is sufficiently large , there will be sufficient inflation to solve all the problems that inflation is intended to solve .the equations describing chaotic inflation can be written simply , provided that we assume that the universe is already flat enough so that we do not need to include a curvature term .the field equation for the inflaton field in the expanding universe is where the overdot denotes a derivative with respect to time , and is the time - dependent hubble parameter given by for the toy - model potential energy of eq .( [ eq:1 ] ) , these equations have a very simple solution : one can then calculate the number of inflationary e - foldings , which is given by in this toy model depends only on and not on the inflaton mass .thus the number of e - foldings will exceed 60 provided that where gev is the planck mass .although this is a super - planckian value for the scalar field , the energy density need not be super - planckian : for example , if gev , then the potential energy density is only .since it is presumably the energy density and not the value of the field that is relevant to gravity , it seems reasonable to assume that the chaotic inflation scenario will not be dramatically affected by corrections from quantum gravity .no matter which form of inflation we might envision , we would like to know what is the evidence that our universe underwent a period of inflation .the answer is pretty much the same no matter which form of inflation we are discussing . in my opinion ,the evidence that our universe is the result of some form of inflation is very solid . since the term _ inflation _ encompasses a wide range of detailed theories , it is hard to imagine any reasonable alternative .the basic arguments are as follows : 1 ._ the universe is big _+ first of all , we know that the universe is incredibly large : the visible part of the universe contains about particles . since we have all grown up in a large universe , it is easy to take this fact for granted : of course the universe is big , it s the whole universe ! 
in `` standard '' frw cosmology , without inflation ,one simply postulates that about or more particles were here from the start .however , in the context of present - day cosmology , many of us hope that even the creation of the universe can be described in scientific terms .thus , we are led to at least think about a theory that might explain how the universe got to be so big .whatever that theory is , it has to somehow explain the number of particles , or more .however , it is hard to imagine such a number arising from a calculation in which the input consists only of geometrical quantities , quantities associated with simple dynamics , and factors of 2 or . the easiest way by far to get a huge number , with only modest numbers as input , is for the calculation to involve an exponential .the exponential expansion of inflation reduces the problem of explaining particles to the problem of explaining 60 or 70 e - foldings of inflation .in fact , it is easy to construct underlying particle theories that will give far more than 70 e - foldings of inflation .inflationary cosmology therefore suggests that , even though the observed universe is incredibly large , it is only an infinitesimal fraction of the entire universe .2 . _ the hubble expansion _ + the hubble expansion is also easy to take for granted , since we have all known about it from our earliest readings in cosmology . in standard frw cosmology , the hubble expansion is part of the list of postulates that define the initial conditions .but inflation actually offers the possibility of explaining how the hubble expansion began .the repulsive gravity associated with the false vacuum is just what hubble ordered .it is exactly the kind of force needed to propel the universe into a pattern of motion in which each pair of particles is moving apart with a velocity proportional to their separation ._ homogeneity and isotropy _ + the degree of uniformity in the universe is startling .the intensity of the cosmic background radiation is the same in all directions , after it is corrected for the motion of the earth , to the incredible precision of one part in 100,000 .to get some feeling for how high this precision is , we can imagine a marble that is spherical to one part in 100,000 .the surface of the marble would have to be shaped to an accuracy of about 1,000 angstroms , a quarter of the wavelength of light .+ although modern technology makes it possible to grind lenses to quarter - wavelength accuracy , we would nonetheless be shocked if we unearthed a stone , produced by natural processes , that was round to an accuracy of 1,000 angstroms .if we try to imagine that such a stone were found , i am sure that no one would accept an explanation of its origin which simply proposed that the stone started out perfectly round .similarly , i do not think it makes sense to consider any theory of cosmogenesis that can not offer some explanation of how the universe became so incredibly isotropic .+ the cosmic background radiation was released about 300,000 years after the big bang , after the universe cooled enough so that the opaque plasma neutralized into a transparent gas .the cosmic background radiation photons have mostly been traveling on straight lines since then , so they provide an image of what the universe looked like at 300,000 years after the big bang .the observed uniformity of the radiation therefore implies that the observed universe had become uniform in temperature by that time . 
in standard frw cosmology, a simple calculation shows that the uniformity could be established so quickly only if signals could propagate at 100 times the speed of light , a proposition clearly contradicting the known laws of physics . in inflationary cosmology , however , the uniformity is easily explained .the uniformity is created initially on microscopic scales , by normal thermal - equilibrium processes , and then inflation takes over and stretches the regions of uniformity to become large enough to encompass the observed universe .4 . _ the flatness problem _ + i find the flatness problem particularly impressive , because of the extraordinary numbers that it involves .the problem concerns the value of the ratio where is the average total mass density of the universe and is the critical density , the density that would make the universe spatially flat .( in the definition of `` total mass density , '' i am including the vacuum energy associated with the cosmological constant , if it is nonzero . ) + there is general agreement that the present value of satisfies but it is hard to pinpoint the value with more precision . despite the breadth of this range , the value of at early times is highly constrained , since is an unstable equilibrium point of the standard model evolution .thus , if was ever _ exactly _ equal to one , it would remain exactly one forever .however , if differed slightly from one in the early universe , that difference whether positive or negative would be amplified with time .in particular , it can be shown that grows as at sec , for example , when the processes of big bang nucleosynthesis were just beginning , dicke and peebles pointed out that must have equaled one to an accuracy of one part in .classical cosmology provides no explanation for this fact it is simply assumed as part of the initial conditions . in the context of modern particle theory , where we try to push things all the way back to the planck time , sec , the problem becomes even more extreme .if one specifies the value of at the planck time , it has to equal one to 58 decimal places in order to be anywhere in the allowed range today . + while this extraordinary flatness of the early universe has no explanation in classical frw cosmology , it is a natural prediction for inflationary cosmology . during the inflationary period , instead of being driven away from one as described by eq .( [ eq:15 ] ) , is driven towards one , with exponential swiftness : where is the hubble parameter during inflation .thus , as long as there is a long enough period of inflation , can start at almost any value , and it will be driven to unity by the exponential expansion .absence of magnetic monopoles _+ all grand unified theories predict that there should be , in the spectrum of possible particles , extremely massive particles carrying a net magnetic charge . 
by combining grand unified theories with classical cosmology without inflation, preskill found that magnetic monopoles would be produced so copiously that they would outweigh everything else in the universe by a factor of about .a mass density this large would cause the inferred age of the universe to drop to about 30,000 years !inflation is certainly the simplest known mechanism to eliminate monopoles from the visible universe , even though they are still in the spectrum of possible particles .the monopoles are eliminated simply by arranging the parameters so that inflation takes place after ( or during ) monopole production , so the monopole density is diluted to a completely negligible level ._ anisotropy of the cosmic background radiation _ + the process of inflation smooths the universe essentially completely , but density fluctuations are generated as inflation ends by the quantum fluctuations of the inflaton field .generically these are adiabatic gaussian fluctuations with a nearly scale - invariant spectrum .new data is arriving quickly , but so far the observations are in excellent agreement with the predictions of the simplest inflationary models . for a review ,see for example bond and jaffe , who find that the combined data give a slope of the primordial power spectrum within 5% of the preferred scale - invariant value .the remainder of this article will discuss eternal inflation the questions that it can answer , and the questions that it raises . in this sectioni discuss the mechanisms that make eternal inflation possible , leaving the other issues for the following sections .i will discuss eternal inflation first in the context of new inflation , and then in the context of chaotic inflation , where it is more subtle .the eternal nature of new inflation was first discovered by steinhardt and vilenkin in 1983 .although the false vacuum is a metastable state , the decay of the false vacuum is an exponential process , very much like the decay of any radioactive or unstable substance .the probability of finding the inflaton field at the top of the plateau in its potential energy diagram does not fall sharply to zero , but instead trails off exponentially with time .however , unlike a normal radioactive substance , the false vacuum exponentially expands at the same time that it decays .in fact , in any successful inflationary model the rate of exponential expansion is always much faster than the rate of exponential decay .therefore , even though the false vacuum is decaying , it never disappears , and in fact the total volume of the false vacuum , once inflation starts , continues to grow exponentially with time , ad infinitum .[ eternalline ] shows a schematic diagram of an eternally inflating universe . the top bar indicates a region of false vacuum .the evolution of this region is shown by the successive bars moving downward , except that the expansion could not be shown while still fitting all the bars on the page .so the region is shown as having a fixed size in comoving coordinates , while the scale factor , which is not shown , increases from each bar to the next . as a concrete example , suppose that the scale factor for each bar is three times larger than for the previous bar . 
if we follow the region of false vacuum as it evolves from the situation shown in the top bar to the situation shown in the second bar , in about one third of the region the scalar field rolls down the hill of the potential energy diagram , precipitating a local big bang that will evolve into something that will eventually appear to its inhabitants as a universe .this local big bang region is shown in gray and labelled `` universe . ''meanwhile , however , the space has expanded so much that each of the two remaining regions of false vacuum is the same size as the starting region .thus , if we follow the region for another time interval of the same duration , each of these regions of false vacuum will break up , with about one third of each evolving into a local universe , as shown on the third bar from the top .now there are four remaining regions of false vacuum , and again each is as large as the starting region .this process will repeat itself literally forever , producing a kind of a fractal structure to the universe , resulting in an infinite number of the local universes shown in gray .there is no standard name for these local universes , but they are often called bubble universes .i prefer , however , to call them pocket universes , to avoid the suggestion that they are round .while bubbles formed in first - order phase transitions are round , the local universes formed in eternal new inflation are generally very irregular , as can be seen for example in the two - dimensional simulation by vanchurin , vilenkin , and winitzki in fig . 2 of ref . .the diagram in fig .[ eternalline ] is of course an idealization .the real universe is three dimensional , while the diagram illustrates a schematic one - dimensional universe .it is also important that the decay of the false vacuum is really a random process , while the diagram was constructed to show a very systematic decay , because it is easier to draw and to think about .when these inaccuracies are corrected , we are still left with a scenario in which inflation leads asymptotically to a fractal structure in which the universe as a whole is populated by pocket universes on arbitrarily small comoving scales .of course this fractal structure is entirely on distance scales much too large to be observed , so we can not expect astronomers to see it .nonetheless , one does have to think about the fractal structure if one wants to understand the very large scale structure of the spacetime produced by inflation .most important of all is the simple statement that once inflation happens , it produces not just one universe , but an infinite number of universes . the eternal nature of new inflation depends crucially on the scalar field lingering at the top of the plateau of fig .[ newinf ] . since the potential function for chaotic inflation , fig .[ chaoticinf ] , does not have a plateau , it is not obvious how eternal inflation can happen in this context .nonetheless , andrei linde showed in 1986 that chaotic inflation can also be eternal .the important point is that quantum fluctuations play an important role in all inflationary models .quantum fluctuations are invariably important on very small scales , and with inflation these very small scales are rapidly stretched to become macroscopic and even astronomical . 
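Before turning to the chaotic case, a toy version of the bar-diagram bookkeeping described above makes the eternal behaviour concrete; the step count is arbitrary.

```python
# Each step: the scale factor grows by 3, one third of each false-vacuum region
# decays into a pocket universe, and two regions the size of the original remain.
false_vacuum_regions = 1
pocket_universes = 0
for step in range(1, 6):
    pocket_universes += false_vacuum_regions   # one new pocket universe per region
    false_vacuum_regions *= 2                  # two surviving false-vacuum regions each
    print(step, false_vacuum_regions, pocket_universes)
# The false-vacuum volume doubles every step even though its fraction of the
# total comoving volume shrinks like (2/3)^n, so the process never terminates.
```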
thus the scalar field associated with inflation has very evident quantum effects .when the mass of the scalar field is small compared to the hubble parameter , these quantum effects are accurately summarized by saying that the quantum fluctuations cause the field to undergo a random walk .it is useful to divide space into regions of physical size , and to discuss the average value of the scalar field within a given region . in a time , the effect of the quantum fluctuations is equivalent to a random gaussian jump of zero mean and a root - mean - squared magnitude given by this random quantum jump is superimposed on the classical motion , as indicated in fig .( [ chaotic - eternal ] ) . to illustrate how eternal inflation happens in the simplest context ,let us consider again the free scalar field described by the potential function of eq .( [ eq:1 ] ) .we consider a region of physical radius , in which the field has an average value .using eq .( [ eq:9 ] ) along with eqs .( [ eq:8 ] ) and ( [ eq:1 ] ) , one finds that the magnitude of the classical change that the field will undergo in a time is given in by let denote the value of which is sufficiently large so that which can easily be solved to find now consider what happens to the region if its initial average value of is equal to . in a time interval ,the volume of the region will increase by . at the end of the time interval we can divide the original region into 20 regions of the same volume as the original , and in each regionthe average scalar field can be written as where denotes the random quantum jump , which is drawn from a gaussian probability distribution with standard deviation .gaussian statistics imply that there is a 15.9% chance that a gaussian random variable will exceed its mean by more than one standard deviation , and therefore there is a 15.9% chance that the net change in will be positive .since there are now 20 regions of the original volume , on average the value of will exceed the original value in 3.2 of these regions .thus the volume for which does not ( on average ) decrease , but instead increases by more than a factor of 3 . since this argument can be iterated , the expectation value of the volume for which increases exponentially with time .typically , therefore , inflation never ends , but instead the volume of the inflating region grows exponentially without bound .the minimum field value for eternal inflation is slightly below , since a volume increase by a factor of 3.2 is more than necessary any factor greater than one would be sufficient .a short calculation shows that the minimal value for eternal inflation is .while the value of is larger than , it is important to note that the energy density can still be much smaller than planck scale : which for gev gives an energy density of .if one repeats the argument with a potential function one finds that and since one requires to be very small in any case so that density perturbations are not too large , one finds again that eternal inflation is predicted to happen at an energy density well below the planck scale .in spite of the fact that the other universes created by eternal inflation are too remote to imagine observing directly , i nonetheless claim that eternal inflation has real consequences in terms of the way we extract predictions from theoretical models . 
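before turning to those consequences , the volume argument above can be checked with a short numerical sketch . the factor of 20 sub - regions per hubble time and the one - standard - deviation classical roll at the critical field value are taken from the text ; everything else here is illustrative and not part of the original calculation .

```python
# minimal sketch of the chaotic eternal inflation volume argument.
# assumptions: e^3 ~ 20 sub-regions are produced per hubble time, the
# classical roll-down equals r standard deviations of the gaussian quantum
# jump, and r = 1 at the critical field value quoted in the text.
import math

def growth_factor(r, subregions=20.0):
    """expected factor by which the volume with phi above the critical value
    grows per hubble time, when the classical roll is r standard deviations."""
    p_up = 0.5 * math.erfc(r / math.sqrt(2.0))   # probability of a net upward change
    return subregions * p_up

print(growth_factor(1.0))    # ~3.2, the factor quoted in the text

# eternal inflation requires a growth factor above 1; in this toy picture the
# threshold lies where the classical roll reaches roughly 1.6 standard
# deviations of the quantum jump, i.e. slightly below the critical field value.
r = 1.0
while growth_factor(r) > 1.0:
    r += 0.001
print(round(r, 2))           # ~1.6
```

with these inputs the expected inflating volume grows by a factor of about 3.2 per hubble time , and in this toy picture the growth only stops once the classical roll exceeds roughly 1.6 standard deviations of the quantum jump , consistent with the statement that the minimum field value for eternal inflation lies only slightly below the value for which the factor is 3.2 .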
specifically , there are three consequences of eternal inflation that i will discuss .first , eternal inflation implies that all hypotheses about the ultimate initial conditions for the universe such as the hartle - hawking no boundary proposal , the tunneling proposals by vilenkin or linde , or the more recent hawking - turok instanton become totally divorced from observation .that is , one would expect that if inflation is to continue arbitrarily far into the future with the production of an infinite number of pocket universes , then the statistical properties of the inflating region should approach a steady state which is independent of the initial conditions .unfortunately , attempts to quantitatively study this steady state are severely limited by several factors .first , there are ambiguities in defining probabilities , which will be discussed later . in addition, the steady state properties seem to depend strongly on super - planckian physics which we do not understand .that is , the same quantum fluctuations that make eternal chaotic inflation possible tend to drive the scalar field further and further up the potential energy curve , so attempts to quantify the steady state probability distribution require the imposition of some kind of a boundary condition at large .although these problems remain unsolved , i still believe that it is reasonable to assume that in the course of its unending evolution , an eternally inflating universe would lose all memory of the state in which it started . even if the universe forgets the details of its genesis , however , i would not assume that the question of how the universe began would lose its interest .while eternally inflating universes continue forever once they start , they are presumably not eternal into the past .( the word _ eternal _ is therefore not technically correct it would be more precise to call this scenario _ semi - eternal _ or _ future - eternal_. ) while the issue is not completely settled , it appears likely that eternally inflating universes must necessarily have a beginning .borde and vilenkin have shown , subject to various assumptions , that spacetimes that are future - eternal must have an initial singularity , in the sense that they can not be past null geodesically complete .the proof , however , requires the weak energy condition , which can be violated by quantum fluctuations . in any case, no one has constructed a viable model without a beginning , and certainly nothing that we know can rule out the possibility of a beginning .the possibility of a quantum origin of the universe is very attractive , and will no doubt be a subject of interest for some time .eternal inflation , however , seems to imply that the entire study will have to be conducted with literally no input from observation .a second consequence of eternal inflation is that the probability of the onset of inflation becomes totally irrelevant , provided that the probability is not identically zero .various authors in the past have argued that one type of inflation is more plausible than another , because the initial conditions that it requires appear more likely to have occurred . in the context of eternal inflation ,however , such arguments have no significance .to illustrate the insignificance of the probability of the onset of inflation , i will use a numerical example .we will imagine comparing two different versions of inflation , which i will call type a and type b. 
they are both eternally inflating but type a will have a higher probability of starting , while type b will be a little faster in its exponential expansion rate . since i am trying to show that the higher starting probability of type a is irrelevant , i will choose my numbers to be extremely generous to type a. first , we must choose a number for how much more probable it is for type a inflation to begin , relative to type b. a googol , , is usually considered a large number it is some 20 orders of magnitude larger than the total number of baryons in the visible universe .but i will be more generous : i will assume that type a inflation is more likely to start than type b inflation by a factor of .type b inflation , however , expands just a little bit faster , say by 0.001% .we need to choose a time constant for the exponential expansion , which i will take to be a typical grand unified theory scale , sec .( represents the time constant for the overall expansion factor , which takes into account both the inflationary expansion and the exponential decay of the false vacuum . ) finally , we need to choose a length of time to let the system evolve . in principlethis time interval is infinite ( the inflation is eternal into the future ) , but to be conservative we will follow the system for only one second .we imagine starting a statistical ensemble of universes at , with an expectation value for the volume of type a inflation exceeding that of type b inflation by . for brevity, i will use the term `` weight '' to refer to the ensemble expectation value of the volume .thus , at the weights of type a inflation and type b inflation will have the ratio after one second of evolution , the expansion factors for type a and type b inflation will be the weights at the end of one second are proportional to these expansion factors , so thus , the initial ratio of is vastly superseded by the difference in exponential expansion factors .in fact , we would have to calculate the exponent of eq .( [ eq:29 ] ) to an accuracy of 25 significant figures to be able to barely detect the effect of the initial factor of .one might criticize the above argument for being naive , as the concept of time was invoked without any specification of how the equal - time hypersurfaces are to be defined .i do not know a decisive answer to this objection ; as i will discuss later , there are unresolved questions concerning the calculation of probabilities in eternally inflating spacetimes .nonetheless , given that there is actually an infinity of time available , it is seems reasonable to believe that the form of inflation that expands the fastest will always dominate over the slower forms by an infinite factor .a corollary to this argument is that new inflation is not dead . while the initial conditions necessary for new inflationcan not be justified on the basis of thermal equilibrium , as proposed in the original papers , in the context of eternal inflation it is sufficient to conclude that the probability for the required initial conditions is nonzero .since the resulting scenario does not depend on the words that are used to justify the initial state , the standard treatment of new inflation remains valid .a third consequence of eternal inflation is the possibility that it offers to rescue the predictive power of theoretical physics . 
herei have in mind the status of string theory , or the theory known as m theory , into which string theory has evolved .the theory itself has an elegant uniqueness , but nonetheless it is not at all clear that the theory possesses a unique vacuum . since predictions will ultimately depend on the properties of the vacuum, the predictive power of string / m theory may be limited .eternal inflation , however , provides a possible mechanism to remedy this problem .even if many types of vacua are equally stable , it may turn out that one of them leads to a maximal rate of inflation .if so , then this type of vacuum will dominate the universe , even if its expansion rate is only infinitesimally larger than the other possibilities .thus , eternal inflation might allow physicists to extract unique predictions , in spite of the multiplicity of stable vacua .in an eternally inflating universe , anything that can happen will happen ; in fact , it will happen an infinite number of times .thus , the question of what is possible becomes trivial anything is possible , unless it violates some absolute conservation law . to extract predictions from the theory, we must therefore learn to distinguish the probable from the improbable .however , as soon as one attempts to define probabilities in an eternally inflating spacetime , one discovers ambiguities .the problem is that the sample space is infinite , in that an eternally inflating universe produces an infinite number of pocket universes .the fraction of universes with any particular property is therefore equal to infinity divided by infinity a meaningless ratio . to obtain a well - defined answer, one needs to invoke some method of regularization . to understand the nature of the problem , it is useful to think about the integers as a model system with an infinite number of entities .we can ask , for example , what fraction of the integers are odd .most people would presumably say that the answer is , since the integers alternate between odd and even . that is , if the string of integers is truncated after the , then the fraction of odd integers in the string is exactly if is even , and is if is odd . in any case , the fraction approaches as approaches infinity .however , the ambiguity of the answer can be seen if one imagines other orderings for the integers .one could , if one wished , order the integers as always writing two odd integers followed by one even integer .this series includes each integer exactly once , just like the usual sequence ( ) .the integers are just arranged in an unusual order . however , if we truncate the sequence shown in eq .( [ eq:30 ] ) after the entry , and then take the limit , we would conclude that 2/3 of the integers are odd .thus , we find that the definition of probability on an infinite set requires some method of truncation , and that the answer can depend nontrivially on the method that is used . 
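the dependence on ordering is easy to verify numerically . the sketch below builds the usual ordering of the integers and the two - odds - then - one - even reordering described above and compares the truncated fractions ; nothing in it goes beyond the example in the text .

```python
# fraction of odd integers under two different orderings, as a function of
# where the infinite sequence is truncated.
def natural(n):
    return list(range(1, n + 1))

def two_odds_one_even(n):
    """1, 3, 2, 5, 7, 4, 9, 11, 6, ...  every integer appears exactly once."""
    odds = iter(range(1, 10 * n, 2))
    evens = iter(range(2, 10 * n, 2))
    seq = []
    while len(seq) < n:
        seq.append(next(odds))
        seq.append(next(odds))
        seq.append(next(evens))
    return seq[:n]

def odd_fraction(seq):
    return sum(1 for k in seq if k % 2 == 1) / len(seq)

for n in (30, 3000, 300000):
    print(n, odd_fraction(natural(n)), odd_fraction(two_odds_one_even(n)))
# the first fraction approaches 1/2, the second approaches 2/3
```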
in the case of eternally inflating spacetimes ,the natural choice of truncation might be to order the pocket universes in the sequence in which they form .however , we must remember that each pocket universe fills its own future light cone , so no pocket universe forms in the future light cone of another .any two pocket universes are spacelike separated from each other , so some observers will see one as forming first , while other observers will see the opposite .one can arbitrarily choose equal - time surfaces that foliate the spacetime , and then truncate at some value of , but this recipe is not unique .in practice , different ways of choosing equal - time surfaces give different results .if one chooses a truncation in the most naive way , one is led to a set of very peculiar results which i call the _ youngness paradox ._ specifically , suppose that one constructs a robertson - walker coordinate system while the model universe is still in the false vacuum ( de sitter ) phase , before any pocket universes have formed .one can then propagate this coordinate system forward with a synchronous gauge condition , in the direction normal to the hypersurface . ] and one can define probabilities by truncating at a fixed value of the synchronous time coordinate .that is , the probability of any particular property can be taken to be proportional to the volume on the hypersurface which has that property .this method of defining probabilities was studied in detail by linde , linde , and mezhlumian , in a paper with the memorable title `` do we live in the center of the world ? ''i will refer to probabilities defined in this way as synchronous gauge probabilities .the youngness paradox is caused by the fact that the volume of false vacuum is growing exponentially with time with an extraordinary time constant , in the vicinity of sec .since the rate at which pocket universes form is proportional to the volume of false vacuum , this rate is increasing exponentially with the same time constant .that means that in each second the number of pocket universes that exist is multiplied by a factor of . atany given time , therefore , almost all of the pocket universes that exist are universes that formed very very recently , within the last several time constants .the population of pocket universes is therefore an incredibly youth - dominated society , in which the mature universes are vastly outnumbered by universes that have just barely begun to evolve .although the mature universes have a larger volume , this multiplicative factor is of little importance , since in synchronous coordinates the volume no longer grows exponentially once the pocket universe forms .probability calculations in this youth - dominated ensemble lead to peculiar results , as discussed in ref .these authors considered the expected behavior of the mass density in our vicinity , concluding that we should find ourselves very near the center of a spherical low - density region . herei would like to discuss a less physical but simpler question , just to illustrate the paradoxes associated with synchronous gauge probabilities .specifically , i will consider the question : `` are there any other civilizations in the visible universe that are more advanced than ours ? 
'' .intuitively i would not expect inflation to make any predictions about this question , but i will argue that the synchronous gauge probability distribution strongly implies that there is no civilization in the visible universe more advanced than us .suppose that we have reached some level of advancement , and suppose that represents the minimum amount of time needed for a civilization as advanced as we are to evolve , starting from the moment of the decay of the false vacuum the start of the big bang .the reader might object on the grounds that there are many possible measures of advancement , but i would respond by inviting the reader to pick any measure she chooses ; the argument that i am about to give should apply to all of them .the reader might alternatively claim that there is no sharp minimum , but instead we should describe the problem in terms of a function which gives the probability that , for any given pocket universe , a civilization as advanced as we are would develop by time .i believe , however , that the introduction of such a probability distribution would merely complicate the argument , without changing the result .so , for simplicity of discussion , i will assume that there is some sharply defined minimum time required for a civilization as advanced as ours to develop .since we exist , our pocket universe must have an age satisfying suppose , however , that there is some civilization in our pocket universe that is more advanced than we are , let us say by 1 second . in that case eq .( [ eq:31 ] ) is not sufficient , but instead the age of our pocket universe would have to satisfy however , in the synchronous gauge probability distribution , universes that satisfy eq .( [ eq:32 ] ) are outnumbered by universes that satisfy eq .( [ eq:31 ] ) by a factor of approximately .thus , if we know only that we are living in a pocket universe that satisfies eq .( [ eq:31 ] ) , it is extremely improbable that it also satisfies eq .( [ eq:32 ] ) .we would conclude , therefore , that it is extraordinarily improbable that there is a civilization in our pocket universe that is at least 1 second more advanced than we are .perhaps this argument explains why seti has not found any signals from alien civilizations , but i find it more plausible that it is merely a symptom that the synchronous gauge probability distribution is not the right one .since the probability measure depends on the method used to truncate the infinite spacetime of eternal inflation , we are not forced to accept the consequences of the synchronous gauge probabilities .a very attractive alternative has been proposed by vilenkin , and developed further by vanchurin , vilenkin , and winitzki .the key idea of the vilenkin proposal is to define probabilities within a single pocket universe ( which he describes more precisely as a connected , thermalized domain ) .thus , unlike the synchronous gauge method , there is no comparison between old pocket universes and young ones . to justify this approachit is crucial to recognize that each pocket universe is infinite , even if one starts the model with a finite region of de sitter space .the infinite volume arises in the same way as it does for the special case of coleman - de luccia bubbles , the interior of which are open robertson - walker universes . from the outside oneoften describes such bubbles in a coordinate system in which they are finite at any fixed time , but in which they grow without bound . 
on the inside , however , the natural coordinate system is the one that reflects the intrinsic homogeneity , in which the space is infinite at any given time .the infinity of time , as seen from the outside , becomes an infinity of spatial extent as seen on the inside .thus , at least for continuously variable parameters , a single pocket universe provides an infinite sample space which can be used to define probabilities .the second key idea of vilenkin s method is to use the inflaton field itself as the time variable , rather than the synchronous time variable discussed in the previous section .this approach can be used , for example , to discuss the probability distribution for in open inflationary models , or to discuss the probability distribution for some arbitrary field that has a flat potential energy function .if , however , the vacuum has a discrete parameter which is homogeneous within each pocket universe , but which takes on different values in different pocket universes , then this method does not apply .the proposal can be described in terms of fig .[ vilenkin - space ] .we suppose that the theory includes an inflaton field of the new inflation type , and some set of fields which have flat potentials . the goal is to find the probability distribution for the fields .we assume that the evolution of the inflaton can be divided into three regimes , as shown on the figure . describes the eternally inflating regime , in which the evolution is governed by quantum diffusion . for , the evolutionis described classically in a slow - roll approximation , so that can be expressed as a function of . for inflationis over , and the field no longer plays an important role in the evolution .the fields are assumed to have a finite range of values , such as angular variables , so that a flat probability distribution is normalizable .they are assumed to have a flat potential energy function for , so that they could settle at any value .they are also assumed to have a flat potential energy function for , although they might interact with during the slow - roll regime , however , so that they can affect the rate of inflation .since the potential for the is flat for , we can assume that they begin with a flat probability distribution on the hypersurface .if the kinetic energy function for the is of the standard form , we take .if , however , the kinetic energy is nonstandard , as is plausible for a field described in angular variables , then the initial probability distribution is assumed to take the reparameterization - invariant form during the slow - roll era , it is assumed that the fields evolve classically , so one can calculate the number of e - folds of inflation as a function of the final value of the ( i.e. , the value of on the hypersurface ) .one can also calculate the final values in terms of the initial values ( i.e. , the value of on the hypersurface ) .one then assumes that the probability density is enhanced by the volume inflation factor , and that the evolution from to results in a jacobian factor . the ( unnormalized ) final probability distribution is thus given by alternatively , if the evolution of the during the slow - roll era is subject to quantum fluctuations , ref . 
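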
shows how to write a fokker - planck equation which is equivalent to averaging the result of eq .( [ eq:35 ] ) over a collection of paths that result from interactions with a noise term .the vilenkin proposal sidesteps the youngness paradox by defining probabilities by the comparison of volumes within one pocket universe .the youngness paradox , in contrast , arose when one considered a probability ensemble of all pocket universes at a fixed value of the synchronous gauge time coordinate an ensemble that is overwhelmingly dominated by very young pocket universes .the proposal has the drawback , however , that it can not be used to compare the probabilities of discretely different alternatives . furthermore , although the results of this method seem reasonable , i do not at this point find them compelling .that is , it is not clear what principles of physics or probability theory ensure that this particular method of regularizing the spacetime is the one that leads to correct predictions .perhaps there is no way to answer this question , so we may be forced to accept this proposal , or something similar to it , as a postulate .in this paper i have summarized the workings of inflation , and the arguments that strongly suggest that our universe is the product of inflation .i argued that inflation can explain the size , the hubble expansion , the homogeneity , the isotropy , and the flatness of our universe , as well as the absence of magnetic monopoles , and even the characteristics of the nonuniformities .the detailed observations of the cosmic background radiation anisotropies continue to fall in line with inflationary expectations , and the evidence for an accelerating universe fits well with the inflationary preference for a flat universe .next i turned to the question of eternal inflation , claiming that essentially all inflationary models are eternal . in my opinionthis makes inflation very robust : if it starts anywhere , at any time in all of eternity , it produces an infinite number of pocket universes .eternal inflation has the very attractive feature , from my point of view , that it offers the possibility of allowing unique predictions even if the underlying string theory does not have a unique vacuum .i have also emphasized , however , that there are important problems in understanding the implications of eternal inflation .first , there is the problem that we do not know how to treat the situation in which the scalar field climbs upward to the planck energy scale .second , the definition of probabilities in an eternally inflating spacetime is not yet a closed issue , although important progress has been made . and third , i might add that the entire present approach is at best semiclassical .a better treatment may not be possible until we have a much better handle on quantum gravity , but eventually this issue will have to be faced .the author particularly thanks andrei linde , alexander vilenkin , neil turok , and other participants in the isaac newton institute programme _ structure formation in the universe _ for very helpful conversations .this work is supported in part by funds provided by the u.s .department of energy ( d.o.e . ) under cooperative research agreement # df - fc02 - 94er40818 , and in part by funds provided by nm rothschild & sons ltd and by the epsrc .ah . guth , _d * 23 * , 347356 ( 1981 ) .starobinsky , _ _ zh . eksp .. fiz . * 30 * , 719 ( 1979 ) [ _ _ jetp lett . * 30 * , 682 ( 1979 ) ] ; aa .starobinsky , _ _ phys . lett . *91b * , 99102 ( 1980 ) .s. 
coleman , _ _ physd * 15 * , 29292936 ( 1977 ) [ see errata * 16 * , 1248 ( 1977 ) ] ; c .callan and s. coleman , _ _ phys .d * 16 * , 17621768 ( 1977 ) .hawking , i .moss , and jm .stewart , _ _ phys .d * 26 * , 26812693 ( 1982 ) . ah .guth and ej .weinberg , _b212 * , 321364 ( 1983 ) .linde , _ _ phys .108b * , 389393 ( 1982 ) .a. albrecht and pj .steinhardt , _ _ phys .lett . * 48 * , 12201223 ( 1982 ) .linde , _ _ zh . eksp .teor . fiz . *38 * , 149151 ( 1983 ) [ _ _ jetp lett .* 38 * , 176179 ( 1983 ) ] ; ad .linde , _ _ phys .129b * , 177181 ( 1983 ) . ad .linde , _ _ phys .b259 * , 3847 ( 1991 ) .ar . liddle and dh .lyth , _ _ phys231 * , 1105 ( 1993 ) , astro - ph/9303019 .linde , _ _ phys .d * 49 * , 748754 ( 1994 ) , astro - ph/9307002 .copeland , ar . liddle , dh .lyth , ed .stewart , and d. wands , _ _ phys .d * 49 * , 6410 - 6433 ( 1994 ) , astro - ph/9401011 .e. stewart , _ _ phys .b345 * , 414415 ( 1995 ) , astro - ph/9407040 .l. randall , m. soljai , and ah .b472 * , 377408 ( 1996 ) , hep - ph/9512439 ; also hep - ph/9601296 .l . jensen and ja .stein - schabes , _ _ physd * 35 * , 11461150 ( 1987 ) , and references therein .starobinsky , _ _ phys117b * , 175178 ( 1982 ) . ah. guth and s .- y .pi , _ _ phys* 49 * , 11101113 ( 1982 ) .sw . hawking , _ _ phys . lett . *115b * , 295297 ( 1982 ) .bardeen , pj .steinhardt , and ms .turner , _ _ physd * 28 * , 679693 ( 1983 ) . ah .r. soc . lond .a * 307 * , 141148 ( 1982 ) .dicke and pj .peebles , in * general relativity : an einstein centenary survey * , eds : sw . hawking and w. israel ( cambridge university press , 1979 ) .preskill , _ _ phys .lett . * 43 * , 13651368 ( 1979 ) .bond and ah .jaffe , talk given at royal society meeting on * the development of large scale structure in the universe , * london , england , 25 - 26 mar 1998 , submitted to _ _ phil . trans .a , astro - ph/9809043 .steinhardt , in * the very early universe * , proceedings of the nuffield workshop , cambridge , 21 june 9 july , 1982 , eds : gw . gibbons , sw . hawking , and st .siklos ( cambridge university press , 1983 ) , pp. 251266 .a. vilenkin , _ _ physd * 27 * , 28482855 ( 1983 ) . ah .guth and s .- y .pi , _ _ physd * 32 * , 18991920 ( 1985 ) .s. coleman & f. de luccia , _ _ phys .d * 21 * , 33053315 ( 1980 ) .v. vanchurin , a. vilenkin , & s. winitzki , gr - qc/9905097 .m. aryal and a. vilenkin , _ _ phys . lett . *199b * , 351357 ( 1987 ) .linde , _ _ mod .phys . lett .* a1 * , 81 ( 1986 ) ; ad .linde , _ _ phys .175b * , 395400 ( 1986 ) ; as . goncharov , ad .linde , and vf .mukhanov , _ _ inta2 * , 561591 ( 1987 ) .a. vilenkin and lh .ford , _ _ physd * 26 * , 12311241 ( 1982 ) .linde , _ _ phys .b116 * , 335 ( 1982 ) .a. starobinsky , in * field theory , quantum gravity and strings * , eds : h.j .de vega & n. snchez , _ _ lecture notes in physics ( springer verlag ) vol .246 , pp . 107126( 1986 ) .. hartle & sw . hawking , _ _ phys .d * 28 * , 29602975 ( 1983 ) .a. vilenkin , _ _ phys .d * 30 * , 509511 ( 1984 ) ; a. vilenkin , _ _ phys .d * 33 * , 35603569 ( 1986 ) ; a. vilenkin , gr - qc/9812027 , to be published in * proceedings of cosmo 98 * , monterey , ca , 15 - 20 november , 1998 . ad .linde , _ _ nuovo cim . * 39 * , 401405 ( 1984 ) ; ad .linde , _ _ phys . rev .d * 58 * , 083514 ( 1998 ) , gr - qc/9802038 .hawking & n .turok , _ _ phys* b425 * , 2532 ( 1998 ) , hep - th/9802030 .a. linde , d. linde , & a. mezhlumian , _ _ phys .d * 49 * , 17831826 ( 1994 ) , gr - qc/9306035 .j. garcia - bellido & a. 
linde , _ phys . rev . _ d * 51 * , 429 - 443 ( 1995 ) , hep - th/9408023 . a. borde & a. vilenkin , _ phys . rev . lett . _ * 72 * , 3305 - 3309 ( 1994 ) , gr - qc/9312022 . a. borde & a. vilenkin , _ phys . rev . _ d * 56 * , 717 - 723 ( 1997 ) , gr - qc/9702019 . a. linde , d. linde , & a. mezhlumian , _ phys . lett . _ * b345 * , 203 - 210 ( 1995 ) , hep - th/9411111 .
|
the basic workings of inflationary models are summarized , along with the arguments that strongly suggest that our universe is the product of inflation . the mechanisms that lead to eternal inflation in both new and chaotic models are described . although the infinity of pocket universes produced by eternal inflation are unobservable , it is argued that eternal inflation has real consequences in terms of the way that predictions are extracted from theoretical models . the ambiguities in defining probabilities in eternally inflating spacetimes are reviewed , with emphasis on the youngness paradox that results from a synchronous gauge regularization technique . vilenkin s proposal for avoiding these problems is also discussed .
|
anaphora resolution is crucial in natural language processing ( nlp ) , specifically , discourse analysis . in the case of english ,partially motivated by message understanding conferences ( mucs ) , a number of coreference resolution methods have been proposed . in other languages such as japanese and spanish , anaphoricexpressions are often omitted .ellipses related to obligatory cases are usually termed zero pronouns . sincezero pronouns are not expressed in discourse , they have to be detected prior to identifying their antecedents .thus , although in english pleonastic pronouns have to be determined whether or not they are anaphoric expressions prior to resolution , the process of analyzing japanese zero pronouns is different from general coreference resolution in english . for identifying anaphoric relations ,existing methods are classified into two fundamental approaches : rule - based and statistical approaches . in rule - based approaches , anaphoric relations between anaphors and their antecedents are identified by way of hand - crafted rules , which typically rely on syntactic structures , gender / number agreement , and selectional restrictions .however , it is difficult to produce rules exhaustively , and rules that are developed for a specific language are not necessarily effective for other languages .for example , gender / number agreement in english can not be applied to japanese .statistical approaches use statistical models produced based on corpora annotated with anaphoric relations .however , only a few attempts have been made in corpus - based anaphora resolution for japanese zero pronouns .one of the reasons is that it is costly to produce a sufficient volume of training corpora annotated with anaphoric relations .in addition , those above methods focused mainly on identifying antecedents , and few attempts have been made to detect zero pronouns .motivated by the above background , we propose a probabilistic model for analyzing japanese zero pronouns combined with a detection method . in brief , our model consists of two parameters associated with zero pronoun detection and antecedent identification .we focus on zero pronouns whose antecedents exist in preceding sentences to zero pronouns because they are major referential expressions in japanese .section [ sec : proposed approach ] explains our proposed method ( system ) for analyzing japanese zero pronouns .section [ sec : evaluation ] evaluates our method by way of experiments using newspaper articles .section [ sec : related works ] discusses related research literature .figure [ fig : overview ] depicts the overall design of our system to analyze japanese zero pronouns .we explain the entire process based on this figure . first , given an input japanese text , our system performs morphological and syntactic analyses . in the case of japanese ,morphological analysis involves word segmentation and part - of - speech tagging because japanese sentences lack lexical segmentation , for which we use the juman morphological analyzer .then , we use the knp parser to identify syntactic relations between segmented words .second , in a zero pronoun detection phase , the system uses syntactic relations to detect omitted cases ( nominative , accusative , and dative ) as zero pronoun candidates . to avoid zero pronouns overdetected, we use the ipal verb dictionary including case frames associated with 911 japanese verbs .we discard zero pronoun candidates unlisted in the case frames associated with a verb in question . 
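a minimal sketch of this case - frame filtering step is given below . the toy dictionary and the clause representation are placeholders for the ipal case frames and the knp parse , which are not reproduced here .

```python
# zero pronoun detection sketch: a case of a verb is proposed as a zero
# pronoun candidate if it is obligatory in the (merged) case frame but is
# not filled by any overt dependent of the verb.
OBLIGATORY = {"ga", "wo", "ni"}            # nominative, accusative, dative

# toy stand-in for the merged ipal case frames (verb -> obligatory cases)
case_frames = {
    "taberu": {"ga", "wo"},                # "eat"
    "iku":    {"ga", "ni"},                # "go"
}

def zero_pronoun_candidates(verb, filled_cases):
    """return the omitted obligatory cases of `verb`.

    `filled_cases` is the set of case markers realized by overt dependents.
    verbs missing from the dictionary fall back to the nominative only.
    """
    frame = case_frames.get(verb, {"ga"})
    return sorted((frame & OBLIGATORY) - set(filled_cases))

print(zero_pronoun_candidates("taberu", {"wo"}))   # ['ga'] -> omitted subject
print(zero_pronoun_candidates("hashiru", set()))   # ['ga'] -> unknown verb
```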
for verbs unlisted in the ipal dictionary ,only nominative cases are regarded as obligatory .the system also computes a probability that case related to target verb is a zero pronoun , , to select plausible zero pronoun candidates .ideally , in the case where a verb in question is polysemous , word sense disambiguation is needed to select the appropriate case frame , because different verb senses often correspond to different case frames .however , we currently merge multiple case frames for a verb into a single frame so as to avoid the polysemous problem .this issue needs to be further explored .third , in a zero pronoun resolution ( i.e. , antecedent identification ) phase , for each zero pronoun the system extracts antecedent candidates from the preceding contexts , which are ordered according to the extent to which they can be the antecedent for the target zero pronoun . from the viewpoint of probability theory , our task here is to compute a probability that zero pronoun refers to antecedent , , and select the candidate that maximizes the probability score . for the purpose of computing this score , we model zero pronouns and antecedents in section [ sec : features ] . finally , the system outputs texts containing anaphoric relations . in addition, the number of zero pronouns analyzed by the system can optionally be controlled based on the certainty score described in section [ sec : certainty ] . according to past literature associated with zero pronoun resolution and our preliminary study , we use the following six features to model zero pronouns and antecedents . features for zero pronouns * verbs that govern zero pronouns ( ) , which denote verbs whose cases are omitted .* surface cases related to zero pronouns ( ) , for which possible values are japanese case marker suffixes , _ ga _ ( nominative ) , _ wo _ ( accusative ) , and _ ni _ ( dative ) . those values indicate which cases are omitted . features for antecedents * post - positional particles ( ) , which play crucial roles in resolving japanese zero pronouns .* distance ( ) , which denotes the distance ( proximity ) between a zero pronoun and an antecedent candidate in an input text . in the case where they occur in the same sentence, its value takes . in the case where an antecedent occurs in sentences previous to the sentenceincluding a zero pronoun , its value takes . *constraint related to relative clauses ( ) , which denotes whether an antecedent is included in a relative clause or not . in the casewhere it is included , the value of takes _ true _ , otherwise _false_. the rationale behind this feature is that japanese zero pronouns tend _ not _ to refer to noun phrases in relative clauses . * semantic classes ( ) , which represent semantic classes associated with antecedents .we use 544 semantic classes defined in the japanese _ bunruigoihyou _ thesaurus , which contains 55,443 japanese nouns .we consider probabilities that unsatisfied case related to verb is a zero pronoun , , and that zero pronoun refers to antecedent , .thus , a probability that case ( ) is zero - pronominalized and refers to candidate is formalized as in equation ( [ eq : product ] ) . 
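written out with generic symbols , the product model described here takes the form below ; this is a sketch rather than the original notation , and the conditioning on the feature sets of section [ sec : features ] is left implicit .

```latex
% plausible rendering of equation ([eq:product]); the symbol names are ours.
% \phi_{c}: the event that case c of verb v is zero-pronominalized,
% A: a candidate antecedent for the resulting zero pronoun.
P(\phi_{c}, A \mid v, c) \;=\; P(\phi_{c} \mid v, c)\cdot P(A \mid \phi_{c})
```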
here , and are computed in the detection and resolution phases , respectively ( see figure [ fig : overview ] ) .since zero pronouns are omitted obligatory cases , whether or not case is a zero pronoun depends on the extent to which case is obligatory for verb .case is likely to be obligatory for verb if frequently co - occurs with .thus , we compute based on the co - occurrence frequency of pairs , which can be extracted from unannotated corpora . takes 1 in the case where is ( nominative ) regardless of the target verb , because is obligatory for most japanese verbs .given the formal representation for zero pronouns and antecedents in section [ sec : features ] , the probability , , is expressed as in equation ( [ eq : paz1 ] ) . to improve the efficiency of probability estimation , we decompose the right - hand side of equation ( [ eq : paz1 ] ) as follows .since a preliminary study showed that and were relatively independent of the other features , we approximate equation ( [ eq : paz1 ] ) as in equation ( [ eq : paz2 ] ) . given that is independent of and , we can further approximate equation ( [ eq : paz2 ] ) to derive equation ( [ eq : paz ] ) . here , the first three factors , , are related to syntactic properties , and is a semantic property associated with zero pronouns and antecedents .we shall call the former and latter `` syntactic '' and `` semantic '' models , respectively .each parameter in equation ( [ eq : paz ] ) is computed as in equations ( [ eq : ppc ] ) , where denotes the frequency of in corpora annotated with anaphoric relations . however , since estimating a semantic model , , needs large - scale annotated corpora , the data sparseness problem is crucial .thus , we explore the use of unannotated corpora . for , and are features for a zero pronoun , and is a feature for an antecedent .however , we can regard , , and as features for a verb and its case noun because zero pronouns are omitted case nouns .thus , it is possible to estimate the probability based on co - occurrences of verbs and their case nouns , which can be extracted automatically from large - scale unannotated corpora .since zero pronoun analysis is not a stand - alone application , our system is used as a module in other nlp applications , such as machine translation . in those applications ,it is desirable that erroneous anaphoric relations are not generated .thus , we propose a notion of certainty to output only zero pronouns that are detected and resolved with a high certainty score .we formalize the certainty score , , for each zero pronoun as in equation ( [ eq : certainty ] ) , where and denote probabilities computed by equation ( [ eq : product ] ) for the first and second ranked candidates , respectively .in addition , is a parametric constant , which is experimentally set to . the certainty score becomes great in the case where is sufficiently great and significantly greater than .to investigate the performance of our system , we used _ kyotodaigaku _text corpus version 2.0 , in which 20,000 articles in _ mainichi shimbun _ newspaper articles in 1995 were analyzed by juman and knp ( i.e. , the morph / syntax analyzers used in our system ) and revised manually . from this corpus , we randomly selected 30 general articles ( e.g. 
, politics and sports ) and manually annotated those articles with anaphoric relations for zero pronouns . the number of zero pronouns contained in those articles was 449 . we used a leave - one - out cross - validation evaluation method : we conducted 30 trials in each of which one article was used as a test input and the remaining 29 articles were used for producing a syntactic model . we used six years worth of _ mainichi shimbun _ newspaper articles to produce a semantic model based on co - occurrences of verbs and their case nouns . to extract verbs and their case noun pairs from newspaper articles , we performed a morphological analysis by juman and extracted dependency relations using a relatively simple rule : we assumed that each noun modifies the verb of highest proximity . as a result , we obtained 12 million co - occurrences associated with 6,194 verb types . then , we generalized the extracted nouns into semantic classes in the japanese _ bunruigoihyou _ thesaurus . in the case where a noun was associated with multiple classes , the noun was assigned to all possible classes . in the case where a noun was not listed in the thesaurus , the noun itself was regarded as a single semantic class . fundamentally , our evaluation is two - fold : we evaluated only zero pronoun resolution ( antecedent identification ) and a combination of detection and resolution . in the former case , we assumed that all the zero pronouns are correctly detected , and investigated the effectiveness of the resolution model , . in the latter case , we investigated the effectiveness of the combined model , .
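returning to the construction of the semantic model described above , the co - occurrence extraction heuristic can be sketched as follows ; the sentence representation , the thesaurus mapping and the counts are toy placeholders , not the juman output or the _ bunruigoihyou _ classes themselves .

```python
# sketch of the verb / case-noun co-occurrence extraction for the semantic
# model.  a sentence is a list of (surface, pos, case_marker) triples in
# reading order; following the simple proximity rule in the text, each
# case-marked noun is attached to the nearest following verb (a reasonable
# reading of "highest proximity" for a head-final language).
from collections import Counter, defaultdict

thesaurus = {"gakusei": ["human"], "ringo": ["food"]}   # toy class mapping

def noun_classes(noun):
    return thesaurus.get(noun, [noun])      # unlisted nouns form their own class

def extract_cooccurrences(sentence):
    pairs = []
    pending = []                            # (noun, case) still waiting for a verb
    for surface, pos, case in sentence:
        if pos == "noun" and case in ("ga", "wo", "ni"):
            pending.append((surface, case))
        elif pos == "verb":
            for noun, c in pending:
                for cls in noun_classes(noun):
                    pairs.append((surface, c, cls))
            pending = []
    return pairs

counts = Counter()
sentence = [("gakusei", "noun", "ga"), ("ringo", "noun", "wo"), ("taberu", "verb", None)]
counts.update(extract_cooccurrences(sentence))

# relative frequencies approximate p(semantic class | verb, case)
totals = defaultdict(int)
for (verb, case, cls), n in counts.items():
    totals[(verb, case)] += n
probs = {k: n / totals[(k[0], k[1])] for k, n in counts.items()}
print(probs)
```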
looking at the results for two different semantic models , outperformed , which indicates that the use of co - occurrences of verbs and their case nouns was effective to identify antecedents and avoid the data sparseness problem in producing a semantic model .the syntactic model , , outperformed the two semantic models independently , and therefore the syntactic features used in our model were more effective than the semantic features to identify antecedents .when both syntactic and semantic models were used in , the accuracy was further improved .while the rule - based method , , achieved a relatively high accuracy , our complete model , , outperformed irrespective of the value of . to sum up, we conclude that both syntactic and semantic models were effective to identify appropriate anaphoric relations . at the same time , since our method requires annotated corpora , the relation between the corpus size and accuracy is crucial . thus , we performed two additional experiments associated with . in the first experiment, we varied the number of annotated articles used to produce a syntactic model , where a semantic model was produced based on six years worth of newspaper articles . in the second experiment, we varied the number of unannotated articles used to produce a semantic model , where a syntactic model was produced based on 29 annotated articles . in figure[ fig : both ] , we show two _ independent _ results as space is limited : the dashed and solid graphs correspond to the results of the first and second experiments , respectively .given all the articles for modeling , the resultant accuracy for each experiment was 50.7% , which corresponds to that for with in table [ tab : riyousosei ] . ) . ] in the case where the number of articles was varied in producing a syntactic model , the accuracy improved rapidly in the first five articles .this indicates that a high accuracy can be obtained by a relatively small number of supervised articles . in the case where the amount of unannotated corpora was varied in producing a semantic model ,the accuracy marginally improved as the corpus size increases .however , note that we do not need human supervision to produce a semantic model . finally , we evaluated the effectiveness of the combination of zero pronoun detection and resolution in equation ( [ eq : product ] ) . to investigate the contribution of the detection model , , we used for comparison . both cases used to compute the probability for zero pronoun resolution .we varied a threshold for the certainty score to plot coverage - accuracy graphs for zero pronoun detection ( figure [ fig : detection ] ) and antecedent identification ( figure [ fig : sikii ] ) . in figure[ fig : detection ] , `` coverage '' is the ratio between the number of zero pronouns correctly detected by the system and the total number of zero pronouns in input texts , and `` accuracy '' is the ratio between the number of zero pronouns correctly detected and the total number of zero pronouns detected by the system .note that since our system failed to detect a number of zero pronouns , the coverage could not be 100% .figure [ fig : detection ] shows that as the coverage decreases , the accuracy improved irrespective of the model used . when compared with the case of , our model , , achieved a higher accuracy regardless of the coverage . 
in figure [fig : sikii ] , `` coverage '' is the ratio between the number of zero pronouns whose antecedents were generated and the number of zero pronouns correctly detected by the system .the accuracy was improved by decreasing the coverage , and our model marginally improved the accuracy for . according to those above results ,our model was effective to improve the accuracy for zero pronoun detection and did not have side effect on the antecedent identification process .as a result , the overall accuracy of zero pronoun detection and resolution was improved .kim and ehara proposed a probabilistic model to resolve subjective zero pronouns for the purpose of japanese / english machine translation . in their model , the search scope for possible antecedents was limited to the sentence containing zero pronouns .in contrast , our method can resolve zero pronouns in both intra / inter - sentential anaphora types .aone and bennett used a decision tree to determine appropriate antecedents for zero pronouns .they focused on proper and definite nouns used in anaphoric expressions as well as zero pronouns .however , their method resolves only anaphors that refer to organization names ( e.g. , private companies ) , which are generally easier to resolve than our case . both above existing methodsrequire annotated corpora for statistical modeling , while we used corpora with / without annotations related to anaphoric relations , and thus we can easily obtain large - scale corpora to avoid the data sparseness problem .nakaiwa used japanese / english bilingual corpora to identify anaphoric relations of japanese zero pronouns by comparing j / e sentence pairs .the rationale behind this method is that obligatory cases zero - pronominalized in japanese are usually expressed in english .however , in the case where corresponding english expressions are pronouns and anaphors , their method is not effective .additionally , bilingual corpora are more expensive to obtain than monolingual corpora used in our method .finally , our method integrates a parameter for zero pronoun detection in computing the certainty score .thus , we can improve the accuracy of our system by discarding extraneous outputs with a small certainty score .we proposed a probabilistic model to analyze japanese zero pronouns that refer to antecedents in the previous context .our model consists of two probabilistic parameters corresponding to detecting zero pronouns and identifying their antecedents , respectively .the latter is decomposed into syntactic and semantic properties . to estimate those parameters efficiently , we used annotated / unannotated corpora .in addition , we formalized the certainty score to improve the accuracy . through experiments , we showed that the use of unannotated corpora was effective to avoid the data sparseness problem and that the certainty score further improved the accuracy .chinatsu aone and scott william bennett .1995 . evaluating automated and manual acquisition of anaphora resolution strategies . in _ proceedings of 33th annual meeting of the association for computational linguistics _ , pages 122129 .yeun - bae kim and terumasa ehara .zero - subject resolution method based on probabilistic inference with evaluation function . in _ proceedings of the 3rd natural language processing pacific - rim symposium _ , pages 721727 .sadao kurohashi and makoto nagao . 1998a .building a japanese parsed corpus while improving the parsing system . 
in _ proceedings of the 1st international conference on language resources & evaluation _ , pages 719 - 724 . ruslan mitkov , lamia belguith , and malgorzata stys . multilingual robust anaphora resolution . in _ proceedings of the 3rd conference on empirical methods in natural language processing _ , pages 7 - 16 . hiromi nakaiwa and satoshi shirai . anaphora resolution of japanese zero pronouns with deictic reference . in _ proceedings of the 16th international conference on computational linguistics _ , pages 812 - 817 . manabu okumura and kouji tamura . zero pronoun resolution in japanese discourse based on centering theory . in _ proceedings of the 16th international conference on computational linguistics _ , pages 871 - 876 . manuel palomar , antonio ferrández , lidia moreno , patricio martínez - barco , jesús peral , maximiliano saiz - noeda , and rafael muñoz . an algorithm for anaphora resolution in spanish texts . _ computational linguistics _ , 27(4):545 - 568 .
|
this paper proposes a method to analyze japanese anaphora , in which zero pronouns ( omitted obligatory cases ) are used to refer to preceding entities ( antecedents ) . unlike the case of general coreference resolution , zero pronouns have to be detected prior to resolution because they are not expressed in discourse . our method integrates two probability parameters to perform zero pronoun detection and resolution in a single framework . the first parameter quantifies the degree to which a given case is a zero pronoun . the second parameter quantifies the degree to which a given entity is the antecedent for a detected zero pronoun . to compute these parameters efficiently , we use corpora with / without annotations of anaphoric relations . we show the effectiveness of our method by way of experiments .
|
high laser powers in scientific experiments and industrial applications often lead to the generation of thermal lenses in optical systems , causing unwanted changes in spatial beam profiles .absorption of high - power laser radiation in transmissive optical materials deposits heat in a non - uniform spatial distribution , leading to a temperature gradient and therefore refractive index gradient across the optic surface .the resulting thermal gradient index lens changes the spatial mode of the laser and can cause aberrations which lead to modal distortions .we report the development of an active device designed to counteract the effects of thermal lensing , first described in ref .four individually addressable heating elements are used to generate thermal gradients across a transmissive optical element , resulting in a controllable thermal lens having both spherical and ( if necessary ) cylindrical components , providing the capability to focus and steer the beam . in this articlewe demonstrate the use of this device as an actuator in a beam shaping feedback loop , operating over a wide non - linear range to correct for impulsive thermally generated beam distortions .while potential applications of such an adaptive device exist in areas of laser material processing , image processing , and optical displays , this work is focused mainly on applications to advanced ligo ( aligo ) , a high power laser interferometer which aims to detect gravitational waves from merging neutron stars , black holes and other astrophysical sources .the aligo pre - stabilized laser system generates a 200w laser beam at 1064 nm .this high power beam is expected to create thermal lenses in many of the optics comprising the detector . in turn, this thermal lensing causes a laser power dependence in the mode matching between the beam from the pre - stabilized laser and the various optical cavities present in the interferometer .in particular , the mode matching between the beam transmitted through the input mode cleaner and the main interferometer power recycling cavity is expected to be dependent on the laser power .a system which can be used to correct for the power dependence is therefore highly desirable in order to maintain good mode matching into the interferometer . for this application , where repositioning of lenses is not easily achieved due to the ultrahigh - vacuum environment , adaptive optical elements present the most practical solution for maintaining optimal beam parameters in the high laser power regime .the device presented here is not the first method developed to compensate thermal lenses ; there are a number of thermal compensation techniques , using for example negative thermo - optic coefficient materials , co laser heating and electrical heating of the optical elements , tunable liquid crystals or deformable mirrors . 
however , this device is vacuum compatible and high laser power compatible , as well as being versatile and reliable .vacuum and high laser power compatibility are essential for use of the device in aligo , where it may be employed within the ultrahigh - vacuum system and will be required to transmit the full science laser power .[ fig : heater ] the basic principle of the adaptive spatial mode control method is shown in fig .[ fig : conceptualdesign ] .the gaussian profile of the science laser beam creates a power - dependent thermal lens in a transmissive optic .a thermal compensation plate which can be heated from the outside with four individual heaters is placed after the transmissive optic .when all four heaters are heated equally , the defocusing thermal gradient index lens that is created in the compensation plate can compensate the focusing thermal lens generated in the transmissive optic .the use of four individual heaters instead of just one ring heater enables greater control over the heating profile , allowing for the correction of astigmatic thermal lenses and also providing beam steering capabilities . in practice, the thermal compensation system will have to correct for non - stationary thermal effects , as the circulating laser power may change during the operation of the interferometer .to this end , the thermal compensation should be applied as part of a feedback loop , where any unwanted changes in the spatial beam profile are detected , processed , and compensated automatically .the compensator was realized in our experiment using the four segmented heater ( fsh ) shown in figure [ fig : heater ] , and described in ref .sf57 glass was chosen for the substrate because of its large thermal expansion and thermo - refractive ( ) coefficients , which give the device a large dynamic range of achievable focal lengths .the four heating segments each consist of a resistive heater on kapton film with a resistance about of 25 , and are each supplied by a current source , delivering a maximum current of 1.5a .this setup allows us to tune the focal length in the range from effectively minus infinity to -10 m .[ fig : opticslayout ] a simplified experimental arrangement is shown in fig .[ fig : opticslayout ] .light from a 200mw nd : yag laser is passed through the first adaptive optic , referred to henceforth as the _ aberrator _ , which is used to simulate the thermal lensing effect which is to be compensated .the beam transmitted through the aberrator is then passed through the second adaptive optic , henceforth referred to as the _ compensator _ , which is used to compensate for the actions of the aberrator .since the laser power in the setup was not sufficient to produce a significant thermal lens , the process of thermal lensing was emulated in another way . initially the aberrator was strongly biased by radially heating , such as to produce a negative focal length lens . a reduction in the radial heating of the aberrator from this level , and hence reduction in the power of thermal lens produced , was used as an analogue for an increase in central heating that would be caused by a high power laser beam .the beam radius at the aberrator was about 1.5 mm ( 1/e intensity ) .a gigabit ethernet ccd camera was placed further downstream in order to monitor the beam transmitted through the aberrator and compensator .the spot size and lateral position of the beam were calculated from the ccd data , and used as a reference relative to which deviations could be measured and corrected . 
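the spot size and position estimates used as the feedback reference can be obtained from the ccd image by a standard moments calculation ; the sketch below assumes a background - subtracted intensity image and a calibrated pixel pitch , and is not the analysis code actually used in the experiment .

```python
# first and second moments of a ccd intensity image give the beam centroid
# and the gaussian width on each axis (sigma of the intensity distribution;
# multiply by sqrt(2) for the 1/e intensity radius, by 2 for the 1/e^2 radius).
import numpy as np

def beam_parameters(image, pixel_pitch):
    img = np.clip(np.asarray(image, dtype=float), 0.0, None)
    total = img.sum()
    y, x = np.indices(img.shape)
    cx = (x * img).sum() / total
    cy = (y * img).sum() / total
    sx = np.sqrt(((x - cx) ** 2 * img).sum() / total)
    sy = np.sqrt(((y - cy) ** 2 * img).sum() / total)
    return {"centroid_h": cx * pixel_pitch, "centroid_v": cy * pixel_pitch,
            "sigma_h": sx * pixel_pitch, "sigma_v": sy * pixel_pitch}

# example with a synthetic elliptical gaussian spot
yy, xx = np.mgrid[0:480, 0:640]
spot = np.exp(-((xx - 320.0) / 60.0) ** 2 - ((yy - 240.0) / 40.0) ** 2)
print(beam_parameters(spot, pixel_pitch=5.6e-6))
```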
after processing the data , four output signals are generated ; the beam radius and centroid position on the horizontal and vertical axes .these signals are used to control the four heating voltages on the fsh in four directions . in order to control the heating - induced thermal lens profile of the compensator accurately , it was necessary first to characterize the response of the compensator to actuation of each of the four heating elements .differences in the mechanical contacts between the heaters and the glass , as well as between the heaters and the mount , can lead to different responses of the compensator to each heater . to map out the asymmetries in the compensator actuation , the beam first was centered on the ccd while all the heating elements were turned off .the voltage applied to the top heating element was steadily increased , and the other three voltages were adjusted in order to re - center the beam on the ccd .this measurement represents a relative calibration of the four different heaters , because unequal heating by the four heaters will lead to a change in the alignment of the transmitted beam .the bias voltages obtained when the top heating element was supplied with 5v were 4.96v , 5.44v and 6.29v for the left , right and bottom heaters respectively .the response of the optical elements to the applied heating is far from linear over their full actuation range . in order to use the full range in a feedback loop it was therefore necessary to make a look up table , which was used to adjust the feedback filters to each heater for each set of heater dc offset voltages .offset voltages were applied to each of the heaters in order to explore the measured beam size parameter space .these dc offsets were chosen such as to provide symmetric heating , as this was the actuation regime of primary interest in this experiment .the dc offsets required to reach specific regions within this space were recorded . at each of these points ,transfer functions from applied voltage to measured beam parameter deviation were measured for each heater .the applied signal amplitude for the transfer functions was .3v ; small enough to be within the linear range of the actuator . 
at 1mhzthis linear range corresponds to a beam width deviation of around 7 m and a beam centroid position deviation of around 3 m .each element in the look up table was then composed from the dc bias offset values required to bring the beam parameters _ close _ to the working point , and the transfer functions from applied signal to beam parameter deviation within the linear range in order to bring the beam parameters to the precise working point .figure [ fig : tfs ] shows the fitted 3-pole transfer function from each heater to the corresponding beam parameters for one set of dc voltage values from the look up table .+ [ fig : tfs ] these transfer functions were used to describe the system in term of its transfer matrix : \delta v_\mathrm{left } \\\delta v_\mathrm{right } \\ \delta v_\mathrm{top } \\\delta v_\mathrm{bottom } \end{bmatrix * } , \label{eqn : vbsp}\ ] ] where and represent beam width and position respectively , and the subscripts h and v represent horizontal and vertical directions respectively ; are the small actuation voltages ( not the bias voltages ) applied to the left , right , top and and bottom sections of the fsh ; and is the 4 transfer matrix .the control matrix for feedback to the compensator was obtained by inverting this transfer matrix .in this section we demonstrate feedback compensation for circularly symmetric thermal aberration , such as may be caused by absorption of high laser powers in transmissive optics under normal incidence .the aberrator was uniformly heated at each quadrant until thermal equilibrium was reached , and then the heating power was turned off .the left panel of fig .[ fig : timages ] shows the temperature profile of the aberrator at thermal equilibrium ; in the central part of the optic the induced temperature gradient creates a good approximation to a spherical thermal lens .figure [ fig : inloopmeasurements ] shows time domain measurements of all 4 of the measured beam parameters while the beam shaping feedback loop was closed .the horizontal and vertical widths of the beam are different since the available laser output beam is elliptical , and no attempt was made to circularize it . during the run , two impulsive aberrations were generated one at 90s ( symmetrically reducing the aberrator heating ) and one at 860s ( re - applying the aberrator heating ) . after each impulsive aberration, the parameters recovered to their set - point values within approximately 300s .[ fig : inloopmeasurements ] initially , as the aberrator cools down and its effective focal length becomes less negative , the beam parameters measured at the ccd begin to change .the beam sizes in both axes decrease at first , as expected due to the change in focal length .the beam spot position in both axes also begins to change ; this change may be caused either by imperfect centering of the beam on the aberrator , or by differences in the rate of heat loss from different areas of the aberrator optic .for example , one may expect a greater rate of heat loss through conduction in the the lower portion of the aberrator due to its mechanical contact with the steel post on which it is mounted .this will lead to a linear component in the thermal gradient across the optic , thus causing a shift in the transmitted beam position . as the beam parameters begin to change, a slow integrator in the feedback signal path determines and applies the bias voltages required to return the beam parameters close to the set - points . 
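the small-signal stage can be sketched as follows: invert the measured 4x4 transfer matrix to obtain the control matrix, then accumulate integral feedback on the four heater voltages. the matrix entries, gain and sampling period below are made-up placeholders rather than measured values; only the bias voltages quoted earlier are reused, purely for illustration.

import numpy as np

# rows: d(width_h), d(width_v), d(pos_h), d(pos_v); columns: left, right, top, bottom heaters
M = np.array([[ 0.8,  0.8,  0.2,  0.2],      # illustrative numbers only
              [ 0.2,  0.2,  0.9,  0.9],
              [ 0.5, -0.5,  0.0,  0.0],
              [ 0.0,  0.0,  0.6, -0.6]])
C = np.linalg.inv(M)                         # control matrix: voltage per unit parameter error

gain, dt = 0.05, 1.0                         # integrator gain and sampling period (placeholders)
v_bias = np.array([4.96, 5.44, 5.00, 6.29])  # dc offsets (left, right, top, bottom)

def feedback_step(dv, measured, setpoint):
    """One integral update of the small-signal heater voltages."""
    error = np.asarray(measured, float) - np.asarray(setpoint, float)
    dv = dv - gain * dt * (C @ error)        # drive the four beam parameters back to the set-point
    return dv, v_bias + dv                   # command sent to the four current sources

in the experiment the matrix entries are frequency dependent and are taken from the look-up table entry closest to the current dc offsets, as described in the text.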
during this time a linear proportional - integral controller also applies small signals ( within the linear regime ) , filtered by the transfer matrix elements corresponding tothe closest current dc offset voltage look up table elements , to compensate for small deviations in each of the four beam parameters around the working point .the beam parameters return to the set - point values by around 300s after the impulsive aberration ; the compensation system has regained the starting beam parameters at the ccd , despite the continued presence of the aberration . in this sectionwe demonstrate the ability of the compensator to correct for astigmatic heating by applying heat in the vertical or horizontal axes only .this experiment was similar to the axisymmetric heating experiment , except that the heating was only applied to the aberrator across the horizontal axis , at the left and right heaters . in the initial statethe aberrator was heated astigmatically until it reached thermal equilibrium , producing the temperature profile shown in the right panel of fig .[ fig : timages ] .three impulsive astigmatic aberrations were then applied by stopping the heating after 420s , reapplying the heating after 1800s , and finally ceasing the heating after 2450s .[ fig : astigmaticcomp ] figure [ fig : astigmaticcomp ] shows the time series of the four measured beam parameters throughout the measurement period . as in fig .[ fig : inloopmeasurements ] , it can be seen that the control loop returns the beam sizes measured at the ccd to their initial set - point values before the impulsive aberrations were applied . however , it is also clear that the beam position along the vertical axis was not well controlled by the feedback loop .when the aberration is first applied the beam drifts lower , and this error is not corrected for by the compensator .this drift is most likely caused by the fact that the look up table was only measured for symmetric dc offset values , i.e. those used for symmetric compensation . as a result , it was decided to weight the feedback to beam size more heavily than the feedback to beam position for the astigmatic compensation case . if necessary , a separate look up table could be made for purely astigmatic compensation , and one may consider interpolating between this and the symmetric compensation look up table in order to cover the full range of compensation . in a realistic application of this device , however , alignment drifts such as those observed here can be compensated easily using movable mirrors . as with any feedback loop, it is important to ascertain the level of noise injected into the system as a consequence of implementing the loop . to this end, the standard deviations in beam parameters were calculated with and without actuation on the aberrator . in the state where neither the aberrator nor compensator were heated , the standard deviations in beam radius were 0.15% and 0.16% of the mean beam radius in the horizontal and vertical axes respectively , and the standard deviations in beam pointing angle were 0.27 and 0.57 for the horizontal and vertical axes . in the case where the aberrator was heated with the strong bias voltage , the corresponding standard deviations increased to 0.40% , 0.33% and 2.9 and 2.1 . 
hereit can be seen that even static actuation of the fshs leads to a significant increase in beam parameter noise .this is almost certainly a consequence of operating the devices in air ; the high temperatures at the element ( > 100 ) cause turbulent air flow in its vicinity resulting in increased jitter downstream .this increased jitter is expected to be strongly mitigated when the actuators are used in vacuum .it has been demonstrated that a segmented heating - compensating system can be used as an actuator in an active feedback loop to correct for time - dependent thermally - induced aberrations of optics .the compensation of both uniform and astigmatic aberrations was tested .the uniform aberration compensation was shown to return all measured beam parameters to their initial values within around 300s of the application of an impulsive thermal aberration .the astigmatic aberration compensation showed a similar performance for most of the measured beam parameters , although it was unable to compensate for a drift in the position of the beam in a direction orthogonal to the axis of compensation .this limitation could likely be improved by better beam centering on the optics or by implementing a look up table made specifically for astigmatic compensation , or could be corrected for by the use of steering mirrors in the beam path .the use of the actuators was shown to increase the standard deviation of the measured beam parameters , though this is almost certainly due to the effects of turbulent air flow around the fshs as a result of their high temperature relative to the environment .these actuators are primarily designed for in - vacuum operation , however , so the turbulence effect does not present a severe limitation on their usefulness .further studies into the vacuum operation of the actuators , and the impact of the actuators on the quality of the beam transmitted through them are recommended in order to determine the potential for this device to make an impact in achieving the high - power operation of aligo . in the future , it may also be interesting to investigate the application of the device presented here as an actuator in a control loop to stabilize mode matching into an interferometer , similar to that described in ref . .the authors acknowledge the encouragement of the ligo science collaboration . prof .prabir barooah is also thanked for his helpful discussions .this work is supported by the national science foundation under grant under grants phy-0855313 and and phy-12055fsh12 .this paper has been assigned the ligo document no .ligo - p1300045 .l. winkelmann , o. puncken , r. kluzik , c. veltkamp , p. kwee , j. poeld , c. bogan , b. willke , m. frede , j. neumann , p. wessels , and d. kracht .`` injection - locked single - frequency laser with an output power of 220w '' . , 102:529538 , 2011 .10.1007/s00340 - 011 - 4411 - 9 .g. mueller , q. shu , r. adhikari , d. b. tanner , d. reitze , d. sigg , n. mavalvala , and j. camp .`` determination and optimization of mode matching into optical cavities by heterodyne detection '' ., 25(4):266268 , feb 2000 .
|
a method for active control of the spatial profile of a laser beam using adaptive thermal lensing is described . a segmented electrical heater was used to generate thermal gradients across a transmissive optical element , resulting in a controllable thermal lens . the segmented heater also allows the generation of cylindrical lenses , and provides the capability to steer the beam in both horizontal and vertical planes . using this device as an actuator , a feedback control loop was developed to stabilize the beam size and position .
|
a new class of non - equilibrium particle systems of two species that interact with each other along a hypersurface is recently introduced in and .the primary goal is to understand the connection between the microscopic transports of positive and negative charges in solar cells and the electric current generated .however , these models are flexible and general enough to provide insight to a variety of natural phenomena , such as the population dynamics of two segregated species under competition .here is an informal description of the model introduced in .a solar cell is modeled by a domain in that is divided into two adjacent sub - domains and by an interface , a -dimensional lipschitz hypersurface .domains and represent the hybrid medium that confine the positive and the negative charges , respectively . an example to keep in mindis when and are two adjacent unit cubes .the interface is then .the particle system is indexed by , the initial number of positive and negative charges in each of and . at microscopic level ,the motion of positive and negative charges are modeled by independent reflected diffusions ( such as reflected brownian motions ) in and , respectively . besides, there is a harvest region that absorbs ( harvests ) charges , respectively , whenever it is being visited .furthermore , when two particles of different types are within a small distance , they disappear at a certain rate if the time of occurrence is an exponential random variable of parameter . in particular , the probability of occurrence in a short amount of time is , where as . ]this annihilation models the trapping , recombination and separation phenomena of the charges .we shall refer to the system just described as the _ annihilating diffusion model_. see figure [ fig : model3clean ] for an illustration .is the interface and are harvest sites , title="fig:"][fig : model3clean ] even though the boundary is fixed and there is no creation of particles , the interactions do affect the correlations among the particles : whether or not a positive particle disappears at a given time affects the empirical distribution of the negative particles , which in term affects that of the positive particles .this challenge is reflected by the non - linearity of the macroscopic limit .this challenge arises again in the study of its fluctuation limit , and it is further reflected by the boundary integral term in the covariance of the gaussian process ; see of theorem [ t : convergence_annihilatingsystem ] .in , we established a _ functional law of large numbers _ for the time trajectory of the particle densities .this is a first step in connecting the microscopic mechanism of the system with the macroscopic behaviors that emerge .more precisely , let be the pair of empirical measure of positive and negative charges at time .we showed that , under a suitable scaling and appropriate conditions on the initial configurations , the random pairs of measures converge in distribution , as , to a pair of deterministic measures which are absolutely continuous with respect to the lebesgue measure .furthermore , the densities with respect to lebesque measure satisfy a system of partial differential equations ( pdes ) that has coupled nonlinear boundary conditions on the interface ; see theorem [ t : conjecture_delta_n_clt ] .it is this nonlinear coupling effect near the interface that distinguishes this model from previously studied ones .the suitable scaling is of order and is rigorously formulated via the annihilation potential function 
in assumption [ a : the annihilation potentialclt ] . in the current work ,we look at a finer scale of the annihilating diffusion model and establish the _ functional central limit theorem _ in theorem [ t : convergence_annihilatingsystem ] .to focus on the fluctuation effect caused by the interaction on the interface , we assume the harvest sites are empty in this paper .the fluctuations of the empirical measures from their mean ( the coupled pdes ) is quantified by where is the integral of an observable ( or test function ) with respect to the measure .intuitively , if is an indicator function of a subset , then is the mass of particles in ( which is the number of particles in divided by ) . in this case, is the fluctuation of the mass of particles in at time .our main result in this paper , theorem [ t : convergence_annihilatingsystem ] , asserts that the fluctuation limit ( as ) is a continuous gaussian markov process whose covariance structure is explicitly characterized .roughly speaking , the limit solves a stochastic partial differential equation ( spde ) which is a nonlinear version of the langevin equation . as a preliminary step to understand the fluctuation for the annihilating diffusion model , we consider in a simpler single species model . in that paper ,the particles move as i.i.d .reflected brownian motions in a bounded domain and are killed by a singular time - dependent potential which concentrates on the boundary .this is motivated by observation that we can view the positive charges as reflected diffusions in subject to killings by a time - dependent random potential .the techniques developed in provides us with a functional analytic setting for our fluctuation processes and allow us to overcome some ( but not all ) challenges for the study of the fluctuation of the annihilating diffusion model .for the latter , we need two new ingredients , namely the _ asymptotic expansion of the correlation functions _ and the _ boltzman - gibbs principle_. more precisely , by generalizing the approach of p. dittrich , we show that the correlation functions have the decomposition where is the hydrodynamic limit of the interacting diffusion system , is an explicit function and is a term converging to zero as tends to infinity .see theorem [ t : asymptotic_nm_correlation_t ] for the precise statement .this result implies propagation of chaos and allows explicit calculations of the covariance of the fluctuation process .the proof of theorem [ t : asymptotic_nm_correlation_t ] is based on a comparison of the bbgky hierarchy satisfied by the correlation functions with two other approximating hierarchies . 
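as a point of reference for the fluctuation field just introduced, the following toy sketch (non-interacting particles with a uniform initial density on the unit interval; all choices are illustrative) checks numerically that the centred and rescaled empirical average of an observable has the variance predicted by the classical central limit theorem, which is the baseline on top of which the boundary annihilation acts.

import numpy as np

rng = np.random.default_rng(0)
N, n_rep = 2000, 5000                          # particles per realization, number of realizations
phi = lambda x: np.cos(np.pi * x)              # a smooth observable on [0, 1]

X = rng.random((n_rep, N))                     # i.i.d. particles, density u0 = 1 on [0, 1]
Y = np.sqrt(N) * (phi(X).mean(axis=1) - 0.0)   # <phi, u0> = int_0^1 cos(pi x) dx = 0

# limiting variance <phi^2, u0> - <phi, u0>^2 = 1/2 for this observable
print(Y.var(), Y.mean())                       # ~0.5 and ~0.0 ; the histogram of Y is gaussian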
on the other hand , the boltzman - gibbs principle ,first formulated mathematically and proven for some zero range processes in equilibrium in , says that the fluctuation fields of non - conserved quantities change on a time scale much faster than the conserved ones , hence in a time integral only the component along those fields of conserved quantities survives .although this principle is proved to hold for a few non - equilibrium situations ( see and the references therein ) , it is not known whether it holds in general .the validity of the principle for our annihilating diffusion model is far from obvious , since there is no conserved quantity .an intuitive explanation for the validity here is as follow : assumption [ a : shrinkingrateclt ] guarantees that the interaction near changes the occupation number of the particles at a slower rate with respect to diffusion ( which conserves the particle number ) . in other words , the particle number is approximately conserved on the time scale that is relevant for the principle .hence we are not far away from equilibrium fluctuation .one of the earliest rigorous results on fluctuation limit was proven by it , who considered a system of independent and identically distributed ( i.i.d . ) brownian motions in and showed that the limit is a -valued gaussian process solving a langevin equation , where is the schwartz space of tempered distributions .fluctuations for interacting diffusions in are studied by various authors ; see for examples of gaussian fluctuations and for an example of non - gaussian fluctuations .sznitman studied the fluctuations of a conservative system of diffusions with normal reflected boundary conditions on smooth domains .it is well known that the correlation method works well for certain stochastic particle systems modeling the reaction - diffusion equation where is the reaction term , such as a polynomial in .see for the continuous setting in which particles are diffusions on the cube ^d ] . let be the normalized empirical measures for the annihilating diffusion system described in the introduction and rigorously constructed in .the main result of implies the following .[ t : conjecture_delta_n_clt ] * ( hydrodynamic limit ) * suppose assumptions [ a : settingclt ] to [ a : shrinkingratelln ] in the above hold . if in as , where , then ,\,m_+({\overline}{d}_+)\times m_+({\overline}{d}_-))\ ] ] for any , where is the probabilistic solution ( see remark [ rk : probabilisticsol_clt ] ) of the following coupled heat equations and with initial value , where is the inward conormal vector field of .[ rk : probabilisticsol_clt]the notion of probabilistic solution in theorem [ t : conjecture_delta_n_clt ] follows that in .precisely , is the unique element in satisfying \\ u_-(t , y ) & = \e^{y}\big[\,u^-_0(x^-_t)\,\exp{\big(-\int^t_0 ( \lambda\,u_+)(t - s , x^-_s)\,dl^-_s \big)}\,\big ] , \end{aligned}\right.\ ] ] where is the boundary local time of the reflected diffusion on the interface .the validity of the previous assertion can be verified by the same argument for proposition 2.19 in . in this chapter , always denote the probabilistic solution of the coupled pdes ( also known as hydrodynamic limit ) in theorem [ t : conjecture_delta_n_clt ] .our object of study in this paper is the * fluctuation process * defined by where and is the fluctuation field in as defined in ( [ def : fluctuationfield ] ) . *functional analytic framework : * as in , it is nontrivial to describe the state space of in which we have weak convergence . 
for this, we adopt the functional analytic setting developed in to each of and .let be a complete orthonormal system ( cons ) of in consisting of neumann eigenfunctions , and the eigenvalue corresponding to ( i.e. ) , with .moreover , for , let be the separable hilbert space with inner product constructed as in , which has cons .now for and , we define by where is the dual paring extending .equip with the inner product .then is a separable hilbert space which has cons and hence has norm given by \(i ) we do not lose any information ( in terms of finite dimensional distributions ) by considering rather than .this is because the distribution of is determined by that of where , and are arbitrary .( ii)as a matter of fact , is equal to the set of continuous linear functionals on , where is equipped with the natural linear structure inherited from . for a general bounded lipschitz domain , the weyl s asymptotic law for the neumann eigenvalues holds ( see ) .that is , the number of eigenvalues ( counting their multiplicities ) less than or equal to , denoted by , satisfies moreover , we have the following bounds for the eigenfunctions proved in ( * ? ? ? * lemma 2.2 ) : for some . for our fluctuation result ( theorem [ t : convergence_annihilatingsystem ] ) to hold , we need the following assumption on which is stronger than assumption [ a : shrinkingratelln ] . roughly speaking , we require to decrease at a slower rate so that the fluctuations in propagate through .this is a high density assumption for the particles .[ a : shrinkingrateclt](annihilation distance for functional clt ) converges to 0 as and ] provided that which is true if , using the weyl s law ( [ e : weyllaw_clt ] ) and the bound ( [ e : eigenfcnupperbound_clt ] ) .suppose the initial position of particles in each of are i.i.d with distribution .it is easy to check that if , then ; furthermore , where is the centered gaussian random variable in with covariance = \<\phi\,\psi,\,u^{\pm}_0\>_{\rho_{\pm } } - \<\phi , u^{\pm}_0\>_{\rho_{\pm}}\,\<\psi , u^{\pm}_0\>_{\rho_{\pm}}.\ ] ] here is the inner product of .the main goal of this paper is to show that the sequence of processes converges as , and to characterize the limit .before stating the fluctuation result , we first define an evolution operator ( see ) as follows : fix any and . consider the following system of backward heat equations for for with terminal data and with _ nonlinear and coupled _ boundary conditions : where is the hydrodynamic limit in theorem [ t : conjecture_delta_n_clt ] , is the inward unit normal of and is the indicator function on the interface .let be the solution for the existence and uniqueness of solution for ( [ e : fluctuationpde_limit ] ) in \times { \overline}{d}_+)\times c([0,t]\times { \overline}{d}_-) ] is uniquely determined . moreover, the coupled pde ( [ e : fluctuationpde_limit ] ) describes the ` transportation ' for the fluctuation limit , and defined above describes the ` driving noise ' .formally , ( [ e : fluctuationpde_limit ] ) is obtained from ( [ e : backwardcoupleeqtqn ] ) , and ( [ e : quadvarmn_clt ] ) is obtained from ( [ e : quadvarmn_clt_n ] ) , both by letting .as mentioned in remark [ rk : transportationzz ] , the limiting process is a gaussian .moreover , we obtain the following properties for the limiting process . 
[t : properties_limit_annihilatingsystem]*(properties of ) * the fluctuation limit in theorem [ t : convergence_annihilatingsystem ] is a continuous gaussian markov process which is uniquely determined in distribution , and has a version in ,\,\mathbf{h}_{-\alpha}) ] , ] ; see theorem [ t : tightness_z ] .3 . step 3 in ,\,\mathbf{h}_{-\alpha}) ] ; see lemma [ l : convergenceofunzn0 ] .step 5 in ,\,\mathbf{h}_{-\alpha}) ] ; see theorem [ t : bgprinciple ] .this rough outline is the same as that for the single species model in .in fact , with all the preliminary estimates in section 2 , and with the asymptotic expansion of the correlation functions ( theorem [ t : asymptotic_nm_correlation_t ] ) proved in section 3 , all the steps except step 2 and step 6 can be treated using the method in .some of the main efforts are directed toward step 2 and step 6 which require asymptotic analysis of the correlation functions ( section 6.2 ) and the generalized correlation functions ( section 6.3 ) respectively .* convention : * to avoid unnecessary complications , we assume , from now on , that and that the underlying motion of the particles are reflected brownian motions ( i.e. and are identity matrices ) .however , our arguments work for general symmetric reflected diffusions as in and for any continuous functions as in .when there is no danger of confusion , for each fixed , we write in place of for simplicity .the constant is always equal to .the minimal augmented filtration generated by the annihilating diffusion process will be abbreviated as .assumptions [ a : settingclt ] to [ a : shrinkingratelln ] are in force throughout the rest of the paper , and we will indicate explicitly whenever assumption [ a : shrinkingrateclt ] is invoked .it is well known ( cf . ) that the -reflected diffusion in definition [ def : reflecteddiffusion ] has a transition density with respect to the symmetrizing measure ( i.e. 
, ) satisfying , that is locally hlder continuous and hence , and that we have two - sided gaussian bounds : for any , there are constants such that for every \times { \overline}{d}\times { \overline}{d} ] and .hence the first inequality in lemma [ l : boundsforggg ] are established .the remaining inequalities in the lemma then follow by the same argument , using point - wise upper bound for and we obtained .it can be shown that converges uniformly on compact subsets of , and respectively , where .furthermore , the limit is the unique continuous solution to the following couple integral equations .we fix and consider the following coupled backward pde for , with neumann boundary conditions and terminal data : where is defined in ( [ e : coupledintegral_fn ] ) and .note that each of the two equations in ( [ e : backwardcoupleeqtqn ] ) is of the form where is a killing potential and ( not necessarily non - negative ) is an external perturbation .this is because we can rewrite [ prop : integral_backwardcoupleeqtqn ] for large enough , and .there is a unique element in \times { \overline}{d}_+)\times c([0,t]\times { \overline}{d}_-) ] which satisfies the following coupled integral equations : \,u_-(s+\theta)\,\big)(x)\,d\theta\\ v^-(s , y)&=&p^-_{t - s}\phi_-(y)-\dfrac{1}{2}\int_0^{t - s } { \cal g}^-_{\theta}\big(\,[v^+(s+\theta)+v^-(s+\theta)]\,u_+(s+\theta)\,\big)(y)\,d\theta,\end{aligned}\ ] ] where .moreover , has the following probabilistic representations : \\ v^-(s , y ) & = & \e\big[\phi_-(x^-_{t - s})e^{-\int_0^{t - s}u_+(s+r , x^-_r)dl^-_r}\\ & & \qquad -\int_0^{t - s } ( v^+ \cdot u_+)(s+\theta , x^-_{\theta})\,e^{-\int_0^{\theta}u_+(s+r , x^-_r)dl^-_r}\,dl^-_{\theta}\,\big|\,x^-_0=y\big],\end{aligned}\ ] ] where is the boundary local time of the rbm on .we call this the * probabilistic solution * of the coupled pde ( [ e : backwardcoupleeqtq ] ) with terminal data .we stress that the right hand side of the above formula is well - defined ; for instance , is well - defined since the value of at is picked up only when hits ( which is a subset of ) .[ def : q_ts_u_ts ] for and , we define to be the probabilistic solution given by proposition [ prop : integral_backwardcoupleeqtq ] . clearly , and for . to stress the dependence in , we sometimes write as for fixed .now we define , for and , recalling the probabilistic representations of and in proposition [ prop : integral_backwardcoupleeqtqn ] and proposition [ prop : integral_backwardcoupleeqtq ] respectively , we see that ( [ e : contractionqnq_annihilatingsystem ] ) follows from the non - negativity of and . to prove ( [ e : uconvergenceqnq_annihilatingsystem ] ) , we fix and let and for ] , then the above estimates , together with ( [ e : surface_integral_boundedness ] ) and ( [ e : contractionqnq_annihilatingsystem ] ) , implies that ,\ ] ] where and iterating ( [ e : bound_qtminusqs_annihilatingsystem ] ) , we have hence , }f(r ) \,\leq\,{\overline}{c}\,a.\ ] ] ( the case is trivial . )we can then extend ( [ e : bound_qtminusqs_annihilatingsystem2 ] ) to take care of the case .namely , by the evolution property and ( [ e : contractionqnq_annihilatingsystem ] ) , we have the proof is complete . [def : correlationfcn ] fix and consider the annihilating diffusion system . for and , we define the **-correlation function at time * * , , by \ ] ] for all , where is the number of particles alive at time in each of and is the number of permutations of objects chosen from objects with . 
intuitively , if we randomly pick and indistinguishable living particles in and respectively at time , then is the probability joint density function for their positions .note that is defined for almost all , and that it depends on both and the initial configurations .we will see , via the bbgky hierarchy ( [ e : bbgky_nm_correlation_t ] ) which will be proved below , that for .we can also replace by ( cf .dittrich and lang and xanh ) .this is because we are interested in the behavior of as , and for each fixed , it is natural , base on the annihilating random walk model in , to expect that we have * propagation of chaos * , which says that when the number of particles tends to infinity , the particles will appear to be independent from each other .more precisely , we expect to have this will be implied by a more exact asymptotic behavior of the , namely theorem [ t : asymptotic_nm_correlation_t ] , which is a key ingredient for the study of fluctuation .our method is motivated by the approach of .[ t : asymptotic_nm_correlation_t ] suppose that ( this implies that the particles are initially independently distributed ) and that . then for all , there exists and an integer that for and , the correlation function has decomposition with where apply dynkin s formula to ( see ( * ? ?* corollary 7.8 ) ) the functional \ ] ] yields where , are operators , , and are functions on defined by note that is a multiplication operator , so it is natural to denote to be the function .note also that the above is a finite sum since when .the system of equation ( [ e : bbgky_nm_correlation_t ] ) is usually called _bbgky hierarchy_. ) as the ` variation of constant ' and as the probabilistic solution ( cf .* proposition 2.19 ) ) for the following heat equation on with neumann boundary condition : ] since by assumption , by repeatedly iterating ( [ e : bbgky_nm_correlation_t ] ) and ( [ e : bbgky_nm_a_t ] ) , we have where and are the tree and the labels defined in subsection 3.5 in .note that is a sum of terms of multiple integrals .following subsection 3.5 , we simplify ( or telescope ) each integrand by chapman - kolmogorov equation , and then apply to obtain for all and , where and is a relabeled tree of defined in subsection 3.5 of .lemma 3.9 and lemma 3.10 in give where is an absolute constant .therefore we have for all and , where . 1 . ( * propagation of chaos * )suppose assumption [ a : shrinkingratelln ] holds .then for any and any , we have uniformly for ] .suppose ] with uniform norm .a direct calculation suggests that the norm of \,\right)\ ] ] blows up in the order of ( for ) , due to the fact that is of order .hence we need to look into the generalized correlation functions .note that by fubinni s theorem followed by the change of variable .hence lemma [ l : tightness_y_phi_2 ] is implied by where is defined in ( [ def : enmpq ] ) .the ideas is to first obtain a ` variation of constant ' formula for via the dynkin s formula ; then iterate the formula to obtain a series expansion of in terms of ; and finally estimate and each term of the series .fix , and , and write then ( [ e : bbgky_nmpq_correlation ] ) yields where , , and are operators defined before , acting on the variables . in other words , is the probabilistic solution of it can be shown ( see proposition 2.19 in for a proof ) that the following probabilistic representation holds true for : ,\ ] ] where , and is the rbm in starting at . 
from this, the triangle inequality and the non - negativity of , we have it then follows that almost everywhere in , we have now we iterate ( [ e : iterate_e_pq ] ) to obtain from this inequality and the triangle inequality , we have , for any , and , where the integral sign for is on the set , inductively , and are obtained from as follows : if , then for any , by definition [ def : generalizedcorrelationfcn ] we have \notag\\ & = & \,\dfrac{n^{(p+1)}n^{(q+1)}}{n^2n^{(p)}n^{(q)}}\,\int_{d_+^{p+1}\times d_-^{q+1}}\ell(x , y)\,\psi(\vec{z},\vec{w})\,f^{(p+1,q+1)}_{u}((x,\vec{z}),\,(y,\vec{w } ) ) \notag\\ & & \,+\,\dfrac{n^{(q+1)}}{n^2\,n^{(q)}}\,\sum_{i=1}^p\int_{d_+^{p}\times d_-^{q+1}}\ell(z_i , y)\,\psi(\vec{z},\vec{w})\,f^{(p , q+1)}_{u}(\vec{z},\,(y,\vec{w } ) ) \notag\\ & & \,+\,\dfrac{n^{(p+1)}}{n^2\,n^{(p)}}\,\sum_{j=1}^q \int_{d_+^{p+1}\times d_-^{q}}\ell(x , w_j)\,\psi(\vec{z},\vec{w})\,f^{(p+1,q)}_{u}((x,\vec{z}),\,\vec{w } ) \notag\\ & & \,+\,\dfrac{1}{n^2}\,\sum_{i=1}^p\sum_{j=1}^q \int_{d_+^{p}\times d_-^{q}}\ell(z_i , w_j)\,\psi(\vec{z},\vec{w})\,f^{(p , q)}_{u}(\vec{z},\,\vec{w}).\end{aligned}\ ] ] this connects to and we know more about the latter ( such as theorem [ t : asymptotic_nm_correlation_t ] ) .furthermore , we use the simple fact that where therefore , for any , we have we now put into inequality ( [ e : epq_0 ] ) for each that appears in ( [ e : tightness_y_phi_2_step2 ] ) at the end of step 2 .specifically , by ( [ e : tightness_y_phi_2_step2 ] ) and ( [ e : epq_0 ] ) respectively , we have \notag\\ & \leq & \sum_{m=0}^{\infty}\int_{t_2=0}^t\cdots\int_{t_{m+1}=0}^{t_m } \sum_{\vec{\theta}\in \,\mathbb{t}^{(1,1)}_m } \bigg[\,\sum_{i=1}^5 \theta_i^{l_m(\vec{\theta})}\big(\psi^{\vec{\theta}}\big)\,\bigg],\end{aligned}\ ] ] where in the first inequality , the integration over the variables is on where ; in the second inequality , is the -th term that appear on the rhs of ( [ e : epq_0 ] ) .we will estimate each of the five terms ( ) on the rhs of ( [ e : tightness_y_phi_2_step4 ] ) separately .the arguments are the same for all of them .we first consider the term for .this term is \notag\\ & = & \sum_{m=0}^{\infty}\int_{t_2=0}^t\cdots\int_{t_{m+1}=0}^{t_m } \sum_{\vec{\theta}\in \,\mathbb{t}^{(1,1)}_m } \bigg[\,\frac{(c_0\,c)^{m+4}\,u}{n\,\delta_n^{2d } } \int_{d_+^{p}\times d_-^{q}}\psi^{\vec{\theta}}(\vec{z},\vec{w})\,\bigg],\end{aligned}\ ] ] where we have used the fact that the sum of the two components of is ( i.e. ) . using the same argument of step 3 in the proof of theorem [ t : asymptotic_nm_correlation_t ] ,we have , for each , for , where .this inequality implies that ( [ e : tightness_y_phi_2_step4_2 ] ) is at most when and is large enough , where . 
for ,the term on the rhs of ( [ e : tightness_y_phi_2_step4 ] ) is equal to .\end{aligned}\ ] ] by the same argument as that for , we have , for each , where we have used the facts that and that .the extra factor in the second inequality comes from the number of children ( in ) for each leaf in .therefore , ( [ e : tightness_y_phi_2_step4_3 ] ) is at most finally , the term for can be compared to the term for directly , since under assumption [ a : shrinkingrateclt ] and hence we can ignore the factor .therefore , the term for is at most .the goal for this subsection is to prove the following lemma , which is an indicator of the validity of the boltzman - gibbs principle for our annihilating diffusion model .it is instructive to compare the statement of lemma [ l : bgprinciple ] below with that of lemma [ l : tightness_y_phi_2 ] .[ l : bgprinciple ] suppose assumption [ a : shrinkingrateclt ] holds . for any , there exists , and positive constants satisfying such that \,\big)\,ds\big|^2\,\big ] \leq c_n\,\|\varphi\|^2\,t^{3/2}\ ] ] whenever , for any and any bounded function on \times d_+\times d_- ] in the same way , then using the definition of and in ( [ e : abbreviate_alpha_beta ] ) , we can rewrite the integrand in ( [ e : lhs_bgprinciple ] ) as follows .-\e[\eta_u - \xi_u]\cdot\e[\eta_v - \xi_v]\\ & = & \int_{d_+^2\times d_-^2}\ell(x_1,y_1)\,\ell(x_2,y_2)\,\bigg\{\ , f^-_u(y_1)f^-_v(y_2)\,\big[f^{(10)(10)}_{u , v}(x_1,x_2)- f^{(10)}_u(x_1)f^{(10)}_v(x_2)\big ] \notag\\ & & \qquad \qquad \qquad\qquad\qquad + f^-_u(y_1)f^+_v(x_2)\,\big[f^{(10)(01)}_{u , v}(x_1,y_2)- f^{(10)}_u(x_1)f^{(01)}_v(y_2)\big ] \notag\\ & & \qquad \qquad \qquad\qquad\qquad + f^+_u(x_1)f^-_v(y_2)\,\big[f^{(01)(10)}_{u , v}(y_1,x_2)- f^{(01)}_u(y_1)f^{(10)}_v(x_2)\big ] \notag\\ & & \qquad \qquad \qquad\qquad\qquad + f^+_u(x_1)f^+_v(x_2)\,\big[f^{(01)(01)}_{u , v}(y_1,y_2)- f^{(01)}_u(y_1)f^{(01)}_v(y_2)\big ] \notag\\ & & \qquad \qquad \qquad\qquad\qquad -f^-_v(y_2)\,\big[f^{(11)(10)}_{u , v}((x_1,y_1),x_2)- f^{(11)}_u(x_1,y_1)f^{(10)}_v(x_2)\big ] \notag\\ & & \qquad \qquad \qquad\qquad\qquad -f^+_v(x_2)\,\big[f^{(11)(01)}_{u , v}((x_1,y_1),y_2)- f^{(11)}_u(x_1,y_1)f^{(01)}_v(y_2)\big ] \notag\\ & & \qquad \qquad \qquad\qquad\qquad -f^-_u(y_1)\,\big[f^{(10)(11)}_{u , v}(x_1,(x_2,y_2))- f^{(10)}_u(x_1)f^{(11)}_v(x_2,y_2)\big ] \notag\\ & & \qquad \qquad \qquad\qquad\qquad -f^+_u(x_1)\,\big[f^{(01)(11)}_{u , v}(y_1,(x_2,y_2))- f^{(01)}_u(y_1)f^{(11)}_v(x_2,y_2)\big ] \notag\\ & & \qquad \qquad \qquad\qquad\qquad + \,\big[f^{(11)(11)}_{u , v}((x_1,y_1),(x_2,y_2))- f^{(11)}_u(x_1,y_1)f^{(11)}_v(x_2,y_2)\big]\,\bigg\}. \notag\end{aligned}\ ] ] note that each of the nine terms can be written in terms of defined in ( [ def : enmpq ] ) , where .we split these nine terms into three groups , where consists of the first , third and fifth terms ; consists of the second , forth and sixth terms ; and consists of the last three terms .that is , and note that we can bound in terms of via ( [ e : iterate_e_pq_2 ] ) .consider the first among the three terms in with replaced .we apply ( [ e : fuufu ] ) to write in terms of plus a lower order term .this gives similarly , when , the second term and the third term in are , respectively , and now we add up the three equations above .the sum of the lower order terms is , by theorem [ t : asymptotic_nm_correlation_t ] or ( [ e : asymptotic_nm_correlation_t_1b ] ) , of order ( i.e. 
a term which tends to zero even if we multiply it by ) uniformly for ] , equal to observe that on the rhs of in ( [ e : integrand_lhs_gp3 ] ) , if we view and as fixed variables , then satisfies since satisfies ( [ e : bbgky_enmpq ] ) .that is , and solve the same hierarchy of equations , but the initial condition is of smaller order of magnitude , by the above cancelations . following the same argument that we used for lemma[ l : tightness_y_phi_2 ] , with in place of , while keeping track of these terms , we obtain whenever and . by the same argument , ( [ e : finalboundlambda3 ] ) holds with replaced by either or . with all the results developed in the previous sections ,the proof of theorem [ t : convergence_annihilatingsystem ] is ready to be presented in this section .recall steps 1 - 6 in the outline of proof at the end of section 5 .we will establish tightness of ( which is step 2 ) and then identify any subsequential limit through steps 1 , 3 , 4 , 5 and 6 .note that for steps 1 , 3 , 4 and 5 , we do not need to go into the analysis of correlation functions ; the results for these steps are for arbitrary time interval rather than for a short time interval as in steps 2 and 6 .the following is step 2 in the outline of proof for theorem [ t : convergence_annihilatingsystem ] . note that we do not need any estimate about the evolution systems and for this step .the key of the proof is lemma [ l : tightness_y_phi_2 ] .[ t : tightness_z ] ( step 2 : tightness ) suppose assumption [ a : shrinkingrateclt ] holds and .for any , there exists such that is tight in ,\,\mathbf{h}_{-\alpha}) ] . for this , it suffices to show is tight in ,\r) ] for all ] .similarly , we have \leq c\,\|\phi^-_k\|^2 ] for all ] to a continuous gaussian martingale with independent increments and covariance functional ( [ e : quadvarmn_clt ] ) .in fact , following the proof of ( * ? ? ? * theorem 4.6 ) , we obtain step 3 . [t : convergenceofm^n_annihilation ] ( step 3 ) when , the square - integrable martingale in theorem [ t:3.3_clt ] converges to in distribution in ,\mathbf{h}_{-\alpha}) ] ) lies within the class of integrands with respect to .furthermore , following the same proof for ( * ? ? ? * theorem 4.9 ) , we obtain the following . 
[t : bgprinciple](step 6 : boltzman - gibbs principle ) suppose and assumption [ a : shrinkingrateclt ] holds .for any , there exists such that ,\,\mathbf{h}_{-\alpha}),\ ] ] where , the operator is defined in ( [ def : u^n_ts ] ) , \,\big).\end{aligned}\ ] ] observe that ( we will need later in the proof ) guarantees , base on weyl s law ( [ e : weyllaw_clt ] ) and ( [ e : eigenfcnupperbound_clt ] ) , that using the definition of the norm is defined in ( [ e : norm_minusalpha_annihilatingsystem ] ) , the uniform bound ( [ e : contractionqnq_annihilatingsystem ] ) and lemma [ l : bgprinciple ] , we have the following : for any , there exists a constant , an integer and positive constants satisfying such that \leq c_n\,t^{3/2}\ ] ] whenever and .in particular , we have , for , }\e\big[\,\big|\int_0^t \mathbf{u}^n_{(t , s)}(\mathbf{b}^n_s\zz^n_s- k^n_s)\,ds\big|^2_{-\alpha}\,\big ] = 0.\ ] ] on other hand , the process is tight in ,\,\mathbf{h}_{-\alpha}) ] by theorem [ t : tightness_z ] , lemma [ l : convergenceofunzn0 ] and theorem [ t : convergencestochint_clt ] respectively , provided that .hence is tight in ,\,\mathbf{h}_{-\alpha})$ ] .now theorem [ t : bgprinciple ] follows from ( [ e : bgprinciple_3 ] ) .chen and w .- t .hydrodynamic limits and propagation of chaos for interacting random walks in domains .preprint , arxiv:1311.2325 .chen and w .- t . fan .systems of interacting diffusions with partial annihilations through membranes ._ to appear .w. grecksch and c. tudor ._ stochastic evolution equations : a hilbert space approach ._ akademie verlag , berlin , 1995 .p. gyrya and l. saloff - coste ._ neumann and dirichlet heat kernels in inner uniform domains ._ _ astrisque * 336 * _ ( 2011 ) , viii+144 pp .
|
we study fluctuations of the empirical processes of a non - equilibrium interacting particle system consisting of two species over a domain , recently introduced in , and establish its functional central limit theorem . this fluctuation limit is a distribution - valued gaussian markov process which can be represented as a mild solution of a stochastic partial differential equation . the drift of our fluctuation limit involves a new partial differential equation , with a nonlinear coupled term on the interface , that characterizes the hydrodynamic limit of the system . the covariance structure of the gaussian part consists of two parts , one involving the spatial motion of the particles inside the domain and the other involving a boundary integral term that captures the boundary interactions between the two species . the key is to show that the boltzman - gibbs principle holds for our non - equilibrium system . our proof relies on generalizing the usual correlation functions to the joint correlations at two different times . * ams 2000 mathematics subject classification * : primary 60f17 , 60k35 ; secondary 60h15 , 92d15 * keywords and phrases * : fluctuation , hydrodynamic limit , reflected diffusion , robin boundary condition , martingale , gaussian process , stochastic partial differential equation , correlation function , bbgky hierarchy , boltzman - gibbs principle
|
game theory is the systematic study of decision - making in strategic situations .its models are widely used in economics , political science , biology and computer science to capture the behavior of individual participants in conflict and competition situations .the field attempts to describe how decision makers do and should interact within a well - defined system of rules to maximize their payoff .the kind of games we will be considering here is called minority games and arises in situations when a group of non communicating agents has to independently choose between two different choices and .payoff > of 1/8 to each . generally a game is defined as a set , where denotes the number of players , the set of available strategies of player , and the payoffs of different game outcomes .for quantum games , we add the associated hilbert space , generally of dim , and the initial state . in a quantum game, the choice of strategy translates to choosing a unitary operator , which is applied locally on the qubit held by the player .the games will be analyzed with regard to two of the most important solution concepts in game theory is the nash equilibrium and pareto optimality .nash equilibrium is defined as the combination of strategies for which no player gains by unilaterally changing their strategy .pareto optimality occurs when no player can rise its payoff without lowering the payoff of others .following the scheme presented in , in the quantum version of the minority game , each player is provided with a qubit from an entangled set .strategy of player is played by doing a local unitary operation on the players own qubit , by applying its strategy operator su(2 ) . will be parameterized in the following way : with ] .the game starts out in an entangled initial state . where is usually taken to be a qubit ghz - state , from which each player is provided with one qubit [ 2][3 ] .the final state of the game becomes to calculate the expected payoff of player we take the trace of the final state multiplied with the projection operator of the player .the projection operator projects the final state onto the desired states of player . the sum is over all the different states , for which player is in the minority . for , we have the following projection operator for player 1 . . in the 6-player game, each player has a sum of 12 such states .the expected payoff is finally given by : .\ ] ] the local unitary operations of the players eliminates the possibility for the system to end up in most states where nobody wins , and therefore yields higher than classical payoff .as a generalization of the broadly used ghz - state as the initial state we consider a superposition with products of symmetric bell pairs .a four qubit version of this state was used in a experimental implementation of a quantum minority game by c. schmid and a.p .flitney in . 
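a minimal numerical sketch of the payoff computation described above, for four players and a ghz initial state, is given below. the su(2) parameterization used is a generic one and is not necessarily the convention of the paper; with all players applying the identity, the ghz state gives zero payoff, which illustrates why non-trivial strategy choices are needed.

import numpy as np
from functools import reduce

def su2(theta, alpha, beta):
    """A generic su(2) strategy operator (this parameterization is an assumption)."""
    return np.array([[np.exp(1j*alpha)*np.cos(theta/2), 1j*np.exp(1j*beta)*np.sin(theta/2)],
                     [1j*np.exp(-1j*beta)*np.sin(theta/2), np.exp(-1j*alpha)*np.cos(theta/2)]])

def payoffs(initial, strategies):
    """Expected payoff of each player: total probability of the basis states in
    which that player's choice is made by a strict minority of the players."""
    n = len(strategies)
    final = reduce(np.kron, strategies) @ initial
    probs = np.abs(final) ** 2
    pay = np.zeros(n)
    for idx, p in enumerate(probs):
        bits = [(idx >> (n - 1 - j)) & 1 for j in range(n)]
        ones = sum(bits)
        for j in range(n):
            same = ones if bits[j] else n - ones
            if same < n - same:              # strictly fewer players made the same choice
                pay[j] += p
    return pay

n = 4
ghz = np.zeros(2 ** n, dtype=complex)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)            # (|0000> + |1111>) / sqrt(2)
print(payoffs(ghz, [np.eye(2)] * n))         # identity strategies: nobody is ever in the minority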
the parameter ] , by letting one player deviate from the ne solution , by playing .the following inequality holds for a nash equilibrium : a ghz - state can be created by acting with an entanglement operator on a product state , where then have where $ ] is a parameter that controls the level of entanglement .this gives an output state of the following form maximum is reached for if is used as initial state for a quantum minority game , the nash equilibrium payoffs will depend on the parameter .for the classical payoffs are obtained , since the game starts out in an unentangled initial state .a -player generalization has been conjectured : where is the classically obtainable payoffs for for classical ne strategies , and for the quantum versions .a six player game could use a product of two three qubit w - states as its initial state . where is a symmetric superposition of nine states with four qubits in the -state and two in the -state , compactly written as .this state therefore has tree minority combinations for each player , and no undesired states !the game simply starts out in the best possible configuration , and the only thing the players should do is to apply the identity operator , to obtain an expected payoff 1/3 , the theoretical maximum for a six - player game .this solution pareto optimal , compared to the six - player game starting with an ghz - state , which is not .e. w. piotrowski , j. sladkowski , an invitation to quantum game theory , international journal of theoretical physics springer , 42 ( 5 ) : 10891099 1 , ( 2003 ) .s. c. benjamin , p.m. hayden , phys .lett 87 , 069801 ( 2001 ) . c. schmid , a.p .flitney , experimental implementation of a four - player quantum game , new j. phys .12 , 063031 , ( 2010 ) .a. p. flitney , a. greentree , coalitions in the minority game : classical cheats and quantum bullies , elsiever science , ( 2008 ) q. chen , y. wang , n - player quantum minority game , physics letters a , a 327 , 98,102 , ( 2004 ) .a. p. flitney , l. c. l. hollenberg , multiplayer quantum minority game with decoherence , quant .comput . 7 , 111 - 126,(2007 ) .
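the payoff of 1/3 claimed above for the six-player game starting from a product of two three-qubit w-states is easy to verify numerically; the sketch below assumes the usual w-state with a single excitation per triple, so that every measurement outcome is a 2-4 split.

import numpy as np

W = np.zeros(8); W[[1, 2, 4]] = 1 / np.sqrt(3)   # (|001> + |010> + |100>) / sqrt(3)
psi = np.kron(W, W)                              # six-qubit initial state (9 nonzero amplitudes)

n, pay = 6, np.zeros(6)
for idx, amp in enumerate(psi):
    p = amp ** 2
    if p == 0.0:
        continue
    bits = [(idx >> (n - 1 - j)) & 1 for j in range(n)]
    ones = sum(bits)
    for j in range(n):
        same = ones if bits[j] else n - ones
        if same < n - same:                      # this player is in the strict minority
            pay[j] += p

print(pay)    # -> [1/3] * 6 with identity strategies, the maximal average payoff for six players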
|
game theory is the mathematical framework for analyzing strategic interactions in conflict and competition situations . in recent years quantum game theory has earned the attention of physicists , and has emerged as a branch of quantum information theory . with the aid of entanglement and linear superposition of strategies , quantum games are shown to yield significant advantage over their classical counterparts . in this paper we explore optimal and equilibrium solutions to quantum minority games . initial states with different level of entanglement are investigated . focus will be on 4 and 6 player games with some -player generalizations .
|
in pressurized water reactors of nuclear plants , the pressure vessel constitutes one element of the second safety barrier between the radioactive fuel rods and the external environment .it is made of 16mnd5 ( a508 ) steel which is forged and welded . in case of operating accidents such as loca ( _ loss of coolant accident _ ), the pressure vessel is subjected to a pressurized thermal shock due to fast injection of cold water into the primary circuit .if some defects ( _ e.g _ cracks ) were present in the vessel wall this may lead to crack initiation and propagation and the brittle fracture of the vessel .the detailed study of the embrittlement of 16mnd5 steel under irradiation is thus a great concern for electrical companies such as edf .the brittle fracture behavior of the 16mnd5 steel has been thoroughly studied in the last decade using the local approach of fracture theory and the so - called beremin model , which assumes that cleavage is controlled by the propagation of the weakest link between a population of pre - existing micro - defects in the material .this approach has been recently coupled with polycrystalline aggregates simulations , .the main idea is to model a material representative volume element ( rve ) as a polycrystalline synthetic aggregate and compute the stress field under given load conditions . as a post - processing a statistical distribution of defects ( carbides )is sampled over the volume . in each gauss point of the finite element mesh the cleavage criterion is attained somewhere along the load path if a ) the equivalent plastic strain has attained some threshold ( cleavage initiation ) and b ) a griffith - like criterion applied to the size of the carbide in this gauss point is reached ( cleavage propagation ) . within the weakest link theory the failure of a single critical carbide induces the failure of the rve . from a single rve simulation ( i.e. a single stress field ) various distributions of carbidesare drawn , each realization leading to a maximal principal stress associated to failure .then the distribution of these quantities is fitted using a weibull law . in such an approach, the current practice of computational micromechanics assumes that the rve is large enough to represent the behavior of the material so that a single polycrystalline analysis is carried out ( the large cpu required by polycrystalline simulations also favours the use of a single simulation ) .however it is believed that numerous parameters such as grain geometry and orientation may influence the stress field and thus the final result .the connection between micromechanics and stochastic methods has been given much attention in the past few years , as shown in .many papers are devoted to developing probabilistic models for reproducing a random microstructure , .the specific representation of polycrystalline microstructures has been addressed in among others .the propagation of the uncertainty on the microstructure through a micromechanical model in order to study the variability of the resulting strain and stresses has not been addressed much though ( see ) . in this paperit is proposed to identify the properties of a stress random field resulting from the progressive loading of a polycrystalline aggregate .more precisely , assuming that the stress random field is gaussian , a procedure called _ periodogram method _ is devised , which allows one to identify the correlation structure of the resulting stress field . 
the paper is organized as follows : in section 2 basics of gaussian random fields are recalled and the periodogram method is presented . the polycrystalline aggregate computational model is detailed in section 3 .the methodology for identifying the correlation structure of the resulting stress field is presented in section 4 .two application cases are then investigated , namely an aggregate with fixed grain boundaries and random crystallographic orientations ( section 5 ) and an aggregate with both random geometry and orientations ( section 6 ) .the variance of the resulting stress field as well as the spatial covariance function and its correlation lengths is investigated in details .the properties of the identified random fields will be used in a forthcoming study in the context of the local approach to fracture , as explained above .in this section an identification method called _ periodogram _ is presented , which uses an estimator of the _ power spectral density _ ( psd ) in order to identify the correlation structure of a _gaussian homogeneous random field_. based on original developments by and for unidimensional fields , it has been extended to two - dimensional cases by .as it relies upon the use of the fast fourier transform ( fft ) its computational efficiency is remarkable .a gaussian random field is completely defined by its mean value , its standard deviation and its auto - covariance function .it is said _ homogeneous _ if the mean value and the standard deviation are constant in the domain of definition of and the auto - covariance function only depends on the shift .let us introduce the _ statistical moment _ and the spatial average : the field is said _ ergodic _ if its ensemble statistics is equal to the spatial average , .several popular covariance models for two - dimensional homogeneous random fields are presented in table [ tab:01 ] . in this table , is the constant standard deviation of the field , are the components of the shift in the two directions , are the correlation lengths in the two directions .gaussian and exponential models are plotted in figure [ fig - ii.1 ] for the sake of illustration . note that we call _ correlation length _ the parameters that appear in the definition of the covariance functions .this is not to be confused with the _ scale of fluctuation _ , which combine both the shape of the covariance function and the lengths . in one dimension , denoting by the autocorrelation function , the scale of fluctuation may be defined by : which reduces to for the exponential correlation function and for the gaussian case .similar expression are available in two and three dimensions , see .lll model & covariance function & power spectral density + exponential & ] & + wave & & + triangle & & + + + if and 0 otherwise + if and 0 otherwise the _ power spectral density _ ( psd ) of the random field is the fourier transform of its covariance function as a result of the wiener - khintchine relationship .the following relationships hold : the psd of the gaussian and exponential covariance models are presented in table [ tab:01 ] .one considers an ergodic homogeneous random field , for which a _ single _ realization is available . if the random field was defined over an _domain , the classical estimation of the covariance function would be : by definition , the fourier transform of the covariance estimation is an estimation of the psd . where denotes the modulus operator . 
in practice, the problem is to estimate the periodogram from a _ limited amount _ of data gathered on grid . due to symmetry ,the covariance estimation in eq.([eq - ii.5 ] ) is recast as follows : by taking the expectation of the above equation one gets : & = \frac{n - k}{n } \frac{m - l}{m } \mathbb{e } \left [ z(x_{1n } + h_{1k},x_{2 m } + h_{2l } ) z(x_{1n},x_{2 m } ) \right ] \\ & = \frac{n - k}{n } \frac{m - l}{m } c(h_1,h_2 ) \end{split}\ ] ] the latter equation exhibits some bias term between the expectation of the estimator and the covariance function . using the symmetry of the covariance function , one can write : = w_b(k , l ) c(h_1,h_2)\ ] ] where is the triangle window , also known as the _ bartlett window _( figure [ fig - ii.3 ] ) : consequently the expectation of the periodogram estimation becomes : & = \mathcal{f } \left\ { \mathbb{e } \left [ \hat{c}(h_{1k},h_{2l } ) \right ] \right\ } = \mathcal{f } \ { w_b(k , l ) c(h_1,h_2)\ } \\ & = w_b(f_1,f_2 ) * s(f_1,f_2 ) \end{split}\ ] ] where and respectively denote the 2d fourier transform operator and the fourier transform of the bartlett window and denotes the convolution product .this window tends to a dirac pulse when tend to infinity and tends to a unit constant .thus the periodogram estimation is asymptotically unbiased .however it is not consistent since its variance does not tend to zero . furthermore using this windowleads to a convolution product which introduces additional computational burden . hence in practice ,the _ modified periodogram _ presented in the next section is used to estimate the psd of the random field .the modified periodogram is built up in order to avoid the convolution product with the transformed window in eq.([eq - ii.11 ] ) . in this approach ,the data is multiplied directly with the window _ before _ the fourier transform is carried out .it aims at filtering the data to limit the influence of long distance terms and to focus on the information given by the short distance terms .this leads to the following estimate : where is the energy of the window calculated by : and denote the size of the two - dimensional domain .various window functions are proposed in , see table [ tab:03 ] . in this paperwe will use mainly the blackman window ( figure [ fig - ii.3 ] ) . [ !ht ] ll model & window equation + bartlett & + + hann & \left [ 0.5 + 0.5 \mbox{cos}(\frac{\pi l}{m})\right ] & \mbox{if ; }\\ 0 & \mbox{otherwise } \\\end{array } \right . |k| \leq n |l| \leq m ] + + blackman & \left [ 0.42 + 0.5 \mbox{cos}(\frac{\pi l}{m } ) + 0.08 \mbox{cos}(\frac{2\pi l}{m})\right ] & \mbox{if ; }\\ 0 & \mbox{otherwise } \\\end{array } \right . $ ] + as shown in section [ section-2.2.1 ] , the estimation of the periodogram is asymptotically unbiased , however not consistent since its variance does not tend to zero when tend to infinity .the averaging of the modified periodogram will solve this problem .assume that realizations of the random field are available .for each realization , one calculates the periodogram as in eq.([eq - ii.12 ] ) : with .then one calculates the average periodogram by : therefore the variance of the average periodogram is : it is then obvious that this variance tends to zero when tends to infinity , making the `` average modified periodogram '' approach more robust . as a summary ,the algorithm to estimate the psd of a random field from realizations may be decomposed into the four following steps : 1 .multiplication of each realization by a selected window , _e.g. 
_ the blackman window ( see table [ tab:03 ] ) ; 2 .computation of 2d fourier transform of the product of the current realization by the filtering window ; 3 .computation of the modulus of the result to obtain the psd estimation of each realization ; 4 .averaging of the psd estimations .once the empirical periodogram has been computed , a _ theoretical _periodogram is selected ( gaussian , exponential , etc ., see table [ tab:01 ] ) and fitted using a least - square procedure . in case of multiple potential forms for the theoretical periodogram the best fitting is selected according to the smallest residual .in this section the computational mechanical model used in this study is presented .it simulates a tensile test on a bidimensional polycrystalline aggregate under plane strain conditions .the various ingredients are discussed , namely : * the microstructure of the material and its synthetic representation ; * the material constitutive law ; * the boundary conditions applied onto the aggregate ; * the mesh used in the finite element simulation . the material is a 16mnd5 ferritic steel with a granular microstructure .the ferrite has a body centered cubic ( bcc ) structure .three families of slip system should be taken into account , namely , , . however , following it is assumed that the glides on the plane 123 are a succession of micro - glides on the planes 110 , 112 .this leads to consider only the two first families , which yields 24 slip systems by symmetry .the model for crystal plasticity chosen in this work has been originally formulated in within the small strain framework .the total strain rate is classically decomposed as the sum of the elastic strain rate and plastic strain rate . elastic part follows the hooke s law and the plastic part is calculated from the shear strain rates of the active slip systems . where is the shear strain rate of the slip system and is the schmid factor which presents the geometrical projection tensor .the latter is calculated from the normal vector to the gliding plane and the direction of gliding . the resolved shear stress ( rss ) of the slip system is the projection of the stress tensor via the schmid factor . the shear strain rates of each slip system are the internal variable that describes plasticity .the evolution of these variables depends on the difference between the rss and the actual _ critical rss _ in an elastoviscoplastic setting : where and are material constants , and if and 0 otherwise .note that this formula corresponds to an elastoviscoplastic constitutive law but the viscous effect will be negligible if sufficiently large values of and are selected .its power form allows one to automatically detect the active slip systems .all the systems are considered active but the slip rate is significant only if the rss is much higher than the critical rss .this procedure allows one to numerically smooth the elastoplastic constitutive law .the critical rss evolves according to the following isotropic hardening law : where .the exponential term presents the hardening saturation in the material when the accumulated slip is high . is the _ initial critical rss _ on the considered system . and are parameters which depend on the material . is the hardening matrix of size whose component presents the hardening effect of the system on the system . 
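The flow rule and isotropic hardening law just quoted can be illustrated by an explicit time integration for a single slip system driven by a prescribed resolved shear stress history. All constants below are placeholders standing in for the calibrated 16MnD5 values of Table 4, and the full model of course couples the 24 systems through the interaction matrix; this is only a sketch of how the viscoplastic regularization and the saturating hardening act together.

```python
import numpy as np

# Placeholder constants (the calibrated 16MnD5 values are those of Table 4)
K, n_exp = 20.0, 10.0     # viscosity parameters of the flow rule (MPa, -)
tau_c0   = 40.0           # initial critical resolved shear stress (MPa)
Q, b     = 10.0, 20.0     # isotropic hardening parameters
h_self   = 1.0            # self-hardening coefficient (single-system demo)

def macaulay(x):
    return np.maximum(x, 0.0)

def integrate_single_system(tau_history, dt):
    """Explicit integration of
         gamma_dot = <(|tau| - tau_c)/K>^n * sign(tau)
       with tau_c = tau_c0 + Q*h*(1 - exp(-b*p)), p = accumulated slip."""
    gamma, p, out = 0.0, 0.0, []
    for tau in tau_history:
        tau_c = tau_c0 + Q * h_self * (1.0 - np.exp(-b * p))
        gamma_dot = macaulay((abs(tau) - tau_c) / K)**n_exp * np.sign(tau)
        gamma += gamma_dot * dt
        p     += abs(gamma_dot) * dt
        out.append((gamma, tau_c))
    return np.array(out)

# Resolved shear stress ramped linearly in time (a schematic loading)
t  = np.linspace(0.0, 10.0, 2001)
dt = t[1] - t[0]
res = integrate_single_system(6.0 * t, dt)     # 6 MPa per unit time
print("final slip  :", res[-1, 0])
print("final tau_c :", res[-1, 1])
```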
in the present work , one considers only two families of slip systems named , .thus the hardening matrix is completely defined by four coefficients only .the values of these coefficients and this matrix are presented in .all the parameters describing crystal plasticity for 16mnd5 steel are gathered in table [ tab:04 ] .c c c c c c c c c c c & & + & & & & & & & & & & + & & & & & & & & & & + & & & & & & & & & & + the construction of the aggregate is based on the voronoi polyhedra model , generated in this work with the quickhull algorithm .the geometry of the resulting synthetic aggregate , which is a simplified representation of the real microstructure of the 16mnd5 steel , is shown in figure [ fig - iii.4 ] .it corresponds to a square of size 1,000 ( this is a relative length which shall be scaled with a real length depending on the grain size ) .grain boundaries are considered as perfect interfaces .note that more detailed models of grains have been proposed recently using so - called laguerre tessellations in order to better fit the observed distributions of grain size , see . the same crystallographic orientation , defined by the three euler angles , , , is randomly assigned to all integration points inside each individual grain using a uniform distribution . in figure[ fig - iii.4]-a , the color of each grain corresponds to a given crystallographic orientation .the mesh is generated by the algorithm of the salome software ( http://www.salome-platform.org ) .the mesh of the generated specimen is presented in figure [ fig - iii.4]-b .the finite elements are quadratic 6-node triangles with integration points .the boundary conditions applied onto the aggregate are sketched in figure [ fig - iii.5 ] .the lower surface is blocked along the direction .the displacements are blocked at the origin of the coordinate system ( lower left corner ) . on the upper surface ,an homogeneous displacement is applied by steps in the direction up to a macroscopic strain equal to .the computation is carried out using the open source finite element software code_aster ( http://www.code-aster.org ) .the computational cost for such a non linear analysis is high .the number of degrees of freedom of the finite element model is .a parallel computing method based on sub - domain decomposition is used .one simulation of a full tensile test up to 3.5% strain requires about hours computation time when distributed over processors . in this section ,we present the result of the simulation of a tensile test on the 2d aggregate at different scales .we define the mean stress and strain tensor calculated in a volume by : figure [ fig - iii.6 ] shows the macroscopic strain / stress curve .it is observed that as expected whereas the uniaxial behaviour shows a global elastoplastic behaviour . at the mesoscopic scaleone can observe the mean strain - stress relationship in each grain as shown in figure [ fig - iii.7 ] . because of the different crystallographic orientations in each grain , the mean elastoplastic behaviour is different from grain to grain .furthermore , whereas the mean stress calculated in all the specimen is zero , the mean values calculated in each single grain are scattered around zero .this observation shows the first scale of heterogeneity of the material .the microscopic behaviour of a single grain ( grain # 24 , see tag in figure [ fig - iii.4 ] ) is finally studied . 
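The construction of the synthetic aggregate described above (Voronoi seeds drawn uniformly in the square, tessellation built with a Qhull-based routine, one set of Euler angles per grain) can be sketched as follows. The clipping of the unbounded Voronoi cells to the square and the meshing step (handled by SALOME in the paper) are omitted; the assignment of orientations simply uses the fact that a point belongs to the Voronoi cell of its nearest seed. The orientation sampling shown here is a crude uniform draw of the three angles, which is a stand-in rather than an exactly uniform distribution on SO(3).

```python
import numpy as np
from scipy.spatial import Voronoi, cKDTree

rng = np.random.default_rng(2)
L, n_grains = 1000.0, 100

# Voronoi seeds drawn uniformly in [0, L]^2 (scipy uses Qhull internally)
seeds = rng.uniform(0.0, L, size=(n_grains, 2))
vor = Voronoi(seeds)
print(len(vor.vertices), "Voronoi vertices")

# One set of Euler angles per grain (crude uniform draw; an exactly uniform
# orientation distribution would sample cos of the second angle uniformly)
euler = np.column_stack([
    rng.uniform(0.0, 2.0 * np.pi, n_grains),
    rng.uniform(0.0, np.pi,       n_grains),
    rng.uniform(0.0, 2.0 * np.pi, n_grains),
])

# Assign every grid/integration point to the grain of its nearest seed,
# which is exactly the Voronoi cell it belongs to
tree = cKDTree(seeds)
xg, yg = np.meshgrid(np.linspace(0, L, 200), np.linspace(0, L, 200), indexing="ij")
grain_id = tree.query(np.column_stack([xg.ravel(), yg.ravel()]))[1].reshape(xg.shape)
print("grain containing the centre point:", grain_id[100, 100],
      "with Euler angles", euler[grain_id[100, 100]])
```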
the mean behavior and the strain - stress relationship at each node of this grainare plotted in figure [ fig - iii.8 ] for four levels of macroscopic strain , namely . in this figurethe blue point represents the stress field within the grain for a macroscopic strain level .this single point shows that the stress field is homogeneous within the grain in the elastic domain .the red points represent the strees values in each node of the grain at macroscopic strain .one observes that the mean strain calculated for this single grain is and the maximal strain value in a specific node may attain about .similar effects are observed at other levels of macroscopic strain , which show the heterogeneity of the strain and stress fields at the microscopic scale .it is observed that the scattering around the mean curve increases with the macroscopic strain .indeed , for the final loading step corresponding to the mean strain in the grain is about 4.54% , while the local strain varies form 2.4 to 9% .in this section the method developed in section 2 is applied to the identification of the properties of the random stress field in polycristalline aggregate calculations .more specifically the _ maximal principal stress _field that is computed from repeated polycrystalline simulations is considered . throughout the paper this stress fieldis considered _gaussian_. this is a strong assumption which shall be considered as a first approximation . indeedthe maximal principal stress is positive in nature under the uniaxial loading that is considered and a gaussian assumption can not totally fit this feature . yetit is believed that the results obtained in terms of the description of the spatial variability ( covariance functions ) , which is the main outcome of the paper , will not be strongly influenced by this assumption .note that methods for identifying the properties of non gaussian random fields have been recently developed , see .the maximal principal stress field is assumed to be gaussian and homogeneous ( the latter assumption will be empirically checked as shown in the sequel ) .the periodogram method is applied using realizations of stress fields , 35 full elastoplastic analysis of aggregates up to a macroscopic strain of 3.5 % .the identification is carried out successively at various levels of the macroscopic strain .two cases are considered : * case # 1 : the grains geometry is the same for all the finite element calculations .only the crystallographic orientations are varying from one calculation to the other .* case # 2 : both the grains geometry and the crystallographic orientations vary .the input data of the identification problem is the maximal principal stress field obtained from the finite element calculations .as the periodogram method is based on a regular sampling of the random field under consideration , the brute result ( the maximal principal stress at the nodes of the mesh ) has to be projected onto a regular grid .this operation is carried out using internal routines of code_aster .note that a slice of width ( of total size ) is discarded along the edges of the aggregate in order to avoid the effect of boundary conditions on the computed stress field , as suggested in .a typical maximal principal stress field is shown in figure [ fig - iv.10 ] .as it was described in section 2 the periodogram method assumes that the random field under consideration is _homogeneous_. 
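The projection step mentioned above, interpolating the nodal maximal principal stresses onto a regular grid and discarding a band near the edges, can be sketched with standard scipy routines. The paper performs this with internal code_aster utilities, so the function below is only a stand-in; the band width, grid resolution and the dummy nodal arrays used in the example are arbitrary.

```python
import numpy as np
from scipy.interpolate import griddata

def stress_on_regular_grid(nodes_xy, sigma_nodal, L=1000.0, n=128, trim=50.0):
    """Linear interpolation of nodal maximal principal stresses onto an n x n
    grid covering [trim, L-trim]^2; the band of width `trim` near the edges is
    discarded to limit boundary-condition effects."""
    x = np.linspace(trim, L - trim, n)
    Xg, Yg = np.meshgrid(x, x, indexing="ij")
    grid = griddata(nodes_xy, sigma_nodal, (Xg, Yg), method="linear")
    nan = np.isnan(grid)                       # points outside the convex hull, if any
    if nan.any():
        grid[nan] = griddata(nodes_xy, sigma_nodal, (Xg[nan], Yg[nan]),
                             method="nearest")
    return grid

# usage with hypothetical finite element output arrays
rng = np.random.default_rng(3)
nodes_xy    = rng.uniform(0.0, 1000.0, (5000, 2))
sigma_nodes = 720.0 + 80.0 * rng.standard_normal(5000)
sigma_grid  = stress_on_regular_grid(nodes_xy, sigma_nodes)
print(sigma_grid.shape, sigma_grid.mean())
```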
from the available realizations one first checks empirically this assumption using the following approach : * the _ ensemble mean and variance _ of the field is computed point - by - point throughout the grid for an increasing number of realizations : if the field is homogeneous these quantities should tend to constant values that are independent from the position when tends to infinity . * in order to measure the magnitude of the spatial fluctuation of the latter , the _ spatial average _ and _ spatial variance _ of a realization of a field sampled onto a grid is defined by : whereas the associated `` spatial '' coefficient of variation is defined by : * the spatial coefficient of variation of the ensemble mean and variance ( eqs.([eq - iv.25])-([eq - iv.26 ] ) ) are computed and plotted as a function of . if the underlying random field is homogeneous it is expected that the curves of and converge to zero . from a visual inspection of the obtained empirical periodogramsit appears that a gaussian or an exponential model of periodogram such as those presented in table [ tab:01 ] may be consistent with the data .however it appeared in the various analyses that the peak of the periodogram is not always in zero .an _ initial frequency _ is thus introduced which shifts the theoretical periodogram . finally , due to lack of fitting of the single - type periodogram ( gaussian and exponential ) ,a combination thereof is also fitted .the most general model finally reads : l_{y1 } \mbox{exp } \left[\pi^2 l_{y1}^2 ( f_y - f^{(1)}_{y0})^2 \right ] \\ & + \sigma_2 ^ 2 \frac{2l_{x2}}{1 + 4\pi^2 l_{x2}^2 ( f_x - f^{(2)}_{x0})^2 } \frac{2l_{y2}}{1 + 4\pi^2 l_{y2}^2 ( f_y - f^{(2)}_{y0})^2 } \end{split}\ ] ] where are correlation lengths in each direction and ( aniso - tropic field ) for each component ( 1 ) ( gaussian part ) and ( 2 ) ( exponential part ) .similarly are initial shift frequencies .note that eq.([eq - iv.31 ] ) corresponds only to positive values of .the periodogram is then extended by symmetry for negative frequencies . 
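The empirical homogeneity check described at the beginning of this section (point-by-point ensemble mean and variance for an increasing number of realizations, summarized by their spatial coefficients of variation) can be written compactly as below. The stack of gridded realizations used in the example is a dummy array, and the precise normalization of eqs. (iv.25)-(iv.26) may differ from the simple spatial-std-over-spatial-mean ratio used here.

```python
import numpy as np

def homogeneity_check(fields):
    """fields: array of shape (n_real, N, M) holding the gridded realizations.
    Returns the spatial coefficients of variation of the ensemble mean and of
    the ensemble variance, for an increasing number of realizations n = 2..n_real."""
    cv_mean, cv_var = [], []
    for n in range(2, fields.shape[0] + 1):
        mean_map = fields[:n].mean(axis=0)          # point-by-point ensemble mean
        var_map  = fields[:n].var(axis=0, ddof=1)   # point-by-point ensemble variance
        cv_mean.append(mean_map.std() / abs(mean_map.mean()))
        cv_var.append(var_map.std() / abs(var_map.mean()))
    return np.array(cv_mean), np.array(cv_var)

# hypothetical stack of 35 gridded realizations of the maximal principal stress
rng = np.random.default_rng(5)
fields = 720.0 + 80.0 * rng.standard_normal((35, 128, 128))
cv_mean, cv_var = homogeneity_check(fields)
print("CV of ensemble mean,     n = 35:", cv_mean[-1])
print("CV of ensemble variance, n = 35:", cv_var[-1])
```

For a homogeneous field both curves should decrease towards zero as the number of realizations grows.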
in terms of associated covariance models, the linear combination of periodograms leads to a linear combination of covariance models .the initial frequency shift in the periodogram leads to oscillatory cosine terms in the covariance by inverse fourier transform : \mbox{cos}(2\pi f^{(1)}_{x0}h_x ) \mbox{cos}(2\pi f^{(1)}_{y0}h_y)\\ & + \sigma_2 ^ 2 \mbox{exp}\left[-(\frac{|h_x|}{l_{x2}}+\frac{|h_y|}{l_{y2 } } ) \right ] \mbox{cos}(2\pi f^{(2)}_{x0}h_x ) \mbox{cos}(2\pi f^{(2)}_{y0}h_y ) \end{split}\ ] ] in order to compare the various fittings the least - square residual between the empirical periodogram ( eq.([eq - ii.15 ] ) ) and the fitted periodogram is finally computed .the following non dimensional error estimate is used : ^ 2 } / \max_{(f_x , f_y)}\bar{s}(f_x , f_y)\ ] ]first the homogeneity of the maximal principal stress field is checked using the methodology proposed in section [ section-4.2 ] .figure [ fig - iv.12 ] shows the evolution of and .these quantities regularly decrease and it is seen that they would tend to zero if a larger number of realizations was available .this leads to accepting the assumption that the random field is homogeneous since the fluctuations around the constant spatial average tend to zero when increases .the average empirical periodogram obtained from realizations of the maximal principal stress field at 3.5% of macroscopic strain is plotted in figure [ fig - iv.13]-a .macroscopic strain ( b ) best fitted periodogram , title="fig:",scaledwidth=48.0% ] macroscopic strain ( b ) best fitted periodogram , title="fig:",scaledwidth=48.0% ] table [ tab:05 ] presents the results of the fitting of the average empirical periodogram calculated from 35 realizations of the field using three models , namely gaussian , exponential and a mixed `` gaussian + exponential '' as in eq.([eq - iv.31 ] ) .cccccccccccc & & & + & ( eq.([eq - iv.33 ] ) ) & & & & & & & & & & + gaussian & & & & & & & & & & & + exponential & & & & & & & & & & & + mixed & & & & & & & & & & & + from the results in table [ tab:05 ] it appears that the mixed model provides a significantly smaller least - square error than that obtained from the gaussian and exponential models respectively .the corresponding fitted periodogram is plotted in figure [ fig - iv.13]-b . in order to better appreciate the quality of the fitting , two - dimensional cuts of the empirical ( resp .fitted ) periodogram are given in figures [ fig - iv.16][fig - iv.15 ] .figure [ fig - iv.16 ] corresponds to a cut along the direction for two values of .figure [ fig - iv.17 ] corresponds to a cut along the direction for two values of .finally figure [ fig - iv.15 ] corresponds to a cut along the diagonal .direction , title="fig:",scaledwidth=48.0% ] direction , title="fig:",scaledwidth=48.0% ] direction , title="fig:",scaledwidth=48.0% ] direction , title="fig:",scaledwidth=48.0% ] , scaledwidth=48.0% ] from the above figures it appears that the fitting of the empirical periodogram by a mixed model is remarkably accurate .it is interesting to interpret the fitted parameters reported in table [ tab:05 ] .first it is observed that the amplitude of each component of the mixed periodogram is similar since .the variance of the field is equal to 6,309 mpa .the associated standard deviation is 79.4 mpa .as the mean principal stress is 720 mpa at 3.5% macroscopic strain , the coefficient of variation of the field is about 11% . 
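A least-squares fit of the mixed model, in the spirit of the procedure summarized above, is sketched below with scipy. The separable one-dimensional factors are the standard Fourier pairs of the gaussian and exponential covariances (√π l exp(−π²l²(f−f₀)²) and 2l/(1+4π²l²(f−f₀)²)); eq. (iv.31) appears to use the same convention, but its gaussian factor is garbled in the text, so the normalization may differ by a constant. The synthetic "empirical" periodogram generated at the end only serves to make the snippet runnable; in practice `S_emp` would be the averaged modified periodogram.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss_1d(f, l, f0):
    return np.sqrt(np.pi) * l * np.exp(-np.pi**2 * l**2 * (f - f0)**2)

def expo_1d(f, l, f0):
    return 2.0 * l / (1.0 + 4.0 * np.pi**2 * l**2 * (f - f0)**2)

def mixed_psd(freqs, s1, lx1, ly1, fx01, fy01, s2, lx2, ly2, fx02, fy02):
    fx, fy = freqs
    return (s1**2 * gauss_1d(fx, lx1, fx01) * gauss_1d(fy, ly1, fy01) +
            s2**2 * expo_1d(fx, lx2, fx02) * expo_1d(fy, ly2, fy02))

def fit_mixed_model(S_emp, fx, fy, p0):
    """S_emp: averaged modified periodogram on the (fx, fy) grid (positive freqs).
    Returns the fitted parameters and a relative error in the spirit of eq. (iv.33)."""
    FX, FY = np.meshgrid(fx, fy, indexing="ij")
    freqs = (FX.ravel(), FY.ravel())
    popt, _ = curve_fit(mixed_psd, freqs, S_emp.ravel(), p0=p0, maxfev=20000)
    resid = S_emp.ravel() - mixed_psd(freqs, *popt)
    return popt, np.sqrt(np.mean(resid**2)) / S_emp.max()

# runnable example with a synthetic periodogram (placeholder values)
fx = fy = np.linspace(0.0, 0.02, 40)
FX, FY = np.meshgrid(fx, fy, indexing="ij")
true = [80, 130, 140, 0.0, 0.0, 80, 60, 70, 0.002, 0.0]
rng = np.random.default_rng(6)
S_emp = mixed_psd((FX, FY), *true) * (1.0 + 0.05 * rng.standard_normal(FX.shape))
popt, err = fit_mixed_model(S_emp, fx, fy, p0=[70, 100, 100, 0, 0, 70, 80, 80, 0, 0])
print("relative error of the fit:", err)
```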
in order to interpret the correlation length parameterslet us define the mean size of a grain such a two - dimensional aggregate .as the volume of edge length equal to 1,000 corresponds to 100 grains , the equivalent diameter of a single grain reads : thus the correlation lengths obtained from the fitting vary from 0.55 to 1.3 .this shows that the characteristic dimension of the underlying microstructure ( ) is of the same order of magnitude as these parameters . in other wordsthe scale of local fluctuation of the stress field is related to the grain size , as heuristically expected .moreover , it appears that the lengths in the and directions are almost identical .the stress field does not show any significant anisotropy in this case . in this section the stability of the fitted parameters as a function of the number of available realizations used in the average periodogram methodis considered . in practicethe procedure applied in the previous paragraph is run using realizations of the stress field .the evolution of the standard deviations is shown in figure [ fig - iv.18 ] .the evolution of the correlation lengths is shown in figure [ fig - iv.19 ] .the evolution of the initial frequencies is shown in figure [ fig - iv.20 ] .from these figures it appears that the fitted parameters tend to a converged value when at least 20 realizations of the stress field are used for their estimation . in this section the evolution of the parameters of the fitted periodograms as a function of the macroscopic strainis investigated . for this purposethe methodology presented in section [ sec:5 - 2 ] is applied using the realizations of the maximal principal stress fields corresponding to various levels of the loading curve , various values of the equivalent macroscopic strain .the evolution of the standard deviations is shown in figure [ fig - iv.18a ] .the two components of the periodogram ( gaussian and exponential ) contribute for approximately the same proportion to the total variance of the field since the curves are almost superimposed .note that these standard deviations increase with the applied load in the same way as the mean load curve ( figure [ fig - iii.6 ] ) .the evolution of the correlation lengths is shown in figure [ fig - iv.19a ] .the evolution of the initial frequencies is shown in figure [ fig - iv.20a ] .it is observed that once plasticity is settled ( once the macroscopic strain is greater than ) the parameters describing the fluctuations of the maximal principal stress field are almost constant .this conclusion is valid for both the correlation lengths and the initial frequencies .note that the convergence is faster for the parameters related to the direction , the direction that is transverse to the one - dimensional loading .finally it is also observed that is almost equal to zero whatever the load level , thus the zero value in table [ tab:05 ] .in this section both the randomness in the grain geometry and in the crystallographic orientations are taken into account .a total number of 35 finite element models are run . in each case, the grain geometry is obtained from a uniform sampling of points from which a vorono tessellation is built . 
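As a side note on the equivalent grain diameter invoked at the beginning of this passage (the defining equation was lost in extraction): one natural definition equates the average grain area to that of a disc, which for a square of side L = 1 000 containing N_g = 100 grains gives, under that assumption,

```latex
d_g \;=\; \sqrt{\frac{4}{\pi}\,\frac{L^2}{N_g}}
    \;=\; \sqrt{\frac{4}{\pi}\,\frac{1000^2}{100}}
    \;\approx\; 113 ,
```

which is indeed of the same order of magnitude as the fitted correlation lengths.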
as in section [ section-5 ] the homogeneity of the maximal principal stress fieldis checked using the methodology proposed in section [ section-4.2 ] .figure [ fig - iv.22 ] shows the evolution of and .these quantities regularly decrease and it is seen that they would tend to zero if a larger number of realizations was available .this leads to accepting the assumption that the random field is homogeneous .the average empirical periodogram obtained from realizations of the maximal principal stress field at 3.5% of macroscopic strain is plotted in figure [ fig - iv.23]-a .three types of theoretical periodograms have been fitted as in the previous section , which lead to the conclusion that the mixed model that combines a gaussian and an exponential component is best suited .the fitted parameters are gathered in table [ tab:06 ] where the parameters fitted for case # 1 are also recalled for the sake of comparison .macroscopic strain ( b ) best fitted periodogram , title="fig:",scaledwidth=48.0% ] macroscopic strain ( b ) best fitted periodogram , title="fig:",scaledwidth=48.0% ] in order to check the accuracy of the fitting , two - dimensional cuts of the empirical ( resp .fitted periodogram ) are plotted in figure [ fig - iv.23 ] ( cut along the direction ) , figure [ fig - iv.24 ] ( cut along the direction ) , figure [ fig - iv.25 ] ( cut along the diagonal ) .again the fitting is remarkably accurate , meaning that the fitted model of periodogram accurately represents the spatial variability of the maximal principal stress field .direction , title="fig:",scaledwidth=48.0% ] direction , title="fig:",scaledwidth=48.0% ] direction , title="fig:",scaledwidth=48.0% ] direction , title="fig:",scaledwidth=48.0% ] , scaledwidth=48.0% ] it can be observed from the figures in table [ tab:06 ] that the fitting is of equal quality in both cases ( relative error less than ) .as far as the contribution of each component of the periodogram is concerned , the symmetry reported in case # 1 is not existing anymore since the standard deviation of the exponential contribution ( ) is much greater than that of the gaussian part ( ) .the total variance of the field is 7940 mpa , corresponding to a standard deviation of 89.1 mpa and a coefficient of variation of 12% .thus there is a little more scattering in the random stress field obtained in case # 2 when considering both the random grain geometry and orientations .cccccccccccc & & & + & ( eq.([eq - iv.33 ] ) ) & & & & & & & & & & + case # 1 & & & & & & & & & & & + case # 2 & & & & & & & & & & & + the correlation lengths associated with the exponential part do not differ much in case # 2 compared to case # 1 ( corresponding here to 1.5 to 2.4 ) .in contrast the correlation lengths related to the gaussian part are increased , which tends to produce less rapidly varying realizations .this may be explained by the fact that the grain boundaries are `` averaged '' in case # 2 whereas they were fixed in case # 1 .the stress concentrations that are usually observed at the grain boundaries are thus smoothed in case # 2 compared to case # 1 . in this section oneconsiders the stability of the fitted parameters as a function of the number of available realizations used in the average periodogram method .the procedure applied in the previous paragraph is run using realizations of the stress field .the evolution of the standard deviations and the initial frequencies is shown in figure [ fig - iv.26 ] ( note that in the present case ) . 
the evolution of the correlation lengths is shown in figure [ fig - iv.27 ] . from these figures it clearly appears that the fitted parameters are almost constant when the number of realizations of the stress field used in their estimation increases .the minimal number of could be used here without significant errors although it is recommended to keep a value of as in case # 1 for robustness . finally the evolution of the parameters of the fitted periodograms as a function of the macroscopic strain is investigated . for this purposethe identification method is applied using the realizations of the maximal principal stress fields corresponding to various levels of the loading curve , various values of the equivalent macroscopic strain .the evolution of the two standard deviations look similar to the results obtained in case # 1 ( figure [ fig - iv.28 ] ) .it is observed that the ratio is almost constant all along the loading path up to 3.5% strain .as far as the initial frequencies are concerned , there is a complete independance with the load level as soon as is greater than , when plasticity has settled in the aggregate .the same conclusion can be drawn for the various correlation lengths .the distribution of stresses in a material at a microscopic scale ( where heterogeneities such as grain structures are taken into account ) has been given much attention in the context of computational homogenization methods .however the current methods usually stick to a deterministic formulation . starting from the premise that any representative volume element ( such as a polycrystalline aggregate ) is a single specific realization of a random quantity, the present paper aims at using methods of computational stochastic mechanics for representing the ( random ) stress field . after recalling the basic mathematics of gaussian random fields ,the paper presents a _ periodogram method _ for estimating the parameters describing the spatial fluctuation of a random field from a collection of realizations of this field .this method is adapted in two dimensions from well - known techniques originating from signal processing .the material under consideration , namely the 16mnd5 steel used in nuclear pressure vessels is then presented together with a local modelling by polycrystalline finite element calculations . from a collection of 35 realizations of the ( maximal principal ) stress field , the spatial correlation structure of the latter is identified . by fitting various theoretical periodograms , a mixed model combining a gaussian and an exponential - type contributionis retained .these two contributions may be empirically interpreted as follows : the gaussian part corresponds to the fluctuation from grain to grain ; the ( less smooth ) exponential component corresponds to the sharp grain boundaries stress concentrations .two cases are considered , namely a `` fixed - geometry '' case in which only the crystallographic orientations changes within the 35 realizations ( fixed grain boundaries ) , and a `` variable geometry '' in which the grain geometry is randomly sampled for each realization . 
in both cases ,a good convergence of the procedure is observed when the number of realizations increases .a set of 20 realizations is recommended , although good results are already obtained for realizations in case # 2 .moreover it is shown that the correlation lengths are of the same order of magnitude as the grain size .the initial frequencies that are required for a best fitting of the periodogram and that translate into some kind of spatial periodicity in the covariogram could be explained by spurious edge effects due to the limited size of the aggregate .this should be investigated more in details in further analysis .another important result is drawn from the comparison of the fitted parameters at various load levels . once plasticity is settled within the aggregate , the parameters describing the spatial fluctuations of the field are almost constant .moreover the variance of the field ( sum of the variance of each component of the periodogram ) increases proportionally to the mean strain / stress curve , meaning that the coefficient of variation of the stress field is almost constant ( around 11% for the fixed geometry and 12% for the variable geometry ) .the results presented in this paper should be confirmed by additional investigations under different types of loading ( biaxial loading ) .the tools that are presented here may be applicable to three - dimensional aggregates and stress fields at a much larger computational cost though .this work is currently in progress .the identified stress fields may eventually be re - simulated : new realizations of the stress fields are straightforwardly obtained at a low computational cost by random field simulation techniques such as the spectral approach or the circulant embedding method .this allow us to apply local approach to fracture analysis ( such as that presented in ) for the assessment of the brittle fracture of metallic materials , as shown in .* acknowledgements * + the second author is funded by a cifre grant at phimeca engineering s.a . subsidized by the french_ agence nationale de la recherche et de la technologie _ ( convention number 027/2010 ) .the research project is supported by edf r&d under contract # 8610-aap5910056413 .these supports are gratefully acknowledged .dang h , sudret b , berveiller m , zeghadi a ( 2011 ) identification of random stress fields from the simulation of polycrystalline aggregates . in : proc .computational modeling of fracture and failure of materials and structures ( cfrac ) , barcelona , spain dang hx , sudret b , berveiller b ( 2011 ) benchmark of random fields simulation methods and links with identification methods . in : faber m , khler j , nishijima k ( eds ) proc .11th int .conf . on applications of stat . and prob .in civil engineering ( icasp11 ) , zurich , switzerland dang hx , berveiller m , zeghadi a , sudret b , yalamas t ( 2013 ) introducing stress random fields of polycrystalline aggregates into the local approach to fracture .11th int .struct . safety and reliability ( icossar2013 ) , deodatis , g. ( ed . 
) , new york , usa , 2013 mathieu j ( 2006 ) analyse et modélisation micromécanique du comportement et de la rupture fragile de l'acier 16mnd5 : prise en compte des hétérogénéités microstructurales . phd thesis , école nationale supérieure des arts et métiers mathieu j , berveiller s , inal k , diard o ( 2006 ) microscale modelling of cleavage fracture at low temperatures : influence of heterogeneities at the granular scale . fatigue fract eng mat struct 29(9 - 10):725-737 perrin g , soize c , duhamel d , funfschilling c ( 2014 ) a posteriori error and optimal reduced basis for stochastic processes defined by a finite set of realizations . siam / asa j. uncertainty quantification , 2(1):745-762
|
the spatial variability of stress fields resulting from polycrystalline aggregate calculations involving random grain geometry and crystal orientations is investigated . a periodogram - based method is proposed to identify the properties of homogeneous gaussian random fields ( power spectral density and related covariance structure ) . based on a set of finite element polycrystalline aggregate calculations the properties of the maximal principal stress field are identified . two cases are considered , using either a fixed or random grain geometry . the stability of the method w.r.t the number of samples and the load level ( up to 3.5 % macroscopic deformation ) is investigated . * keywords : * polycrystalline aggregates crystal plasticity random fields spatial variability correlation structure
|
the area of the surface of a nonspinning , quiescent ( schwarzschild ) black hole is is , which is written in the usual relativist s convention in which units are chosen so that both newton s constant and the speed of light are set equal to unity .the usual schwarzschild coordinate is defined to be an areal coordinate ( spherical area = ) , so its value at the black hole surface ( the _ horizon _ ) is . because the black hole is spherical, we simply need to measure the area in a transverse direction .this produces the unique result ( ) .the uniqueness follows because if we consider a different definition of the 3-space in which we measure the area , we just shift our points in null directions along the ( null ) horizon .null directions have zero length and can not contribute to ( or change ) the area . an occasional question to the teacher of relativityis : ... then , what is the _ volume _ of a black hole ? "the answer is that , unlike the response about the surface , the volume depends on the way that the 3-dimensional constant - time " space containing the black hole is defined .gravity , described by general relativity , is the curvature of space - time , and the implied curvature for the defined by our choice of constant time depends on how the now " space is defined .the simplest black hole is an _ eternal _ black hole , one that was not formed by collapsing matter but is a nonlinear vacuum " solution , with structure anchored in its own gravitational field . even in this casethere are many choices of constant time , and hence many different results for the volume , of the chosen 3-space within the horizon .this article presents a pedagogical description of that ( well - known to the expert ) fact .all of the background needed for this paper can be found in _ , henceforth _ mtw_. we will also provide original references where appropriate .einstein s general relativity describes the gravitational field by giving the spacetime metric .a metric describes the way coordinate increments apply to measurable ( proper " ) space or time increments .the _ special _ relativity metric ( describing a spacetime without gravity ) is written : s^2 = -t^2 + _ ij x^i x^j [ eq : flatmetric ] here we use the _ summation convention _ : repeated indices are summed over through their range ; variables range through the spatial coordinates , and if , and if . (the superscripts are indices , not exponents . ) in eq.([eq : flatmetric ] ) , is a proper distance , so to make the units match , the first term on the right should have a factor , the square of the speed of light . as noted above , relativists simplify expressions by using units in which the speed of light is unity .( for instance : the length unit is the light year and the time unit is the year . ) also , it is useful to rewrite expression ( 1 ) using spherical coordinates : s^2 = -t^2 + f_ij x^i x^j [ eq : flatmetricspherical ] where now range through the spatial coordinates , and is a diagonal symmetric matrix = diag[1 , ( x^1)^2 , ( x^1)^2 sin^2 ( x^2)]$ ] .the metric is an example of a tensor , a geometrical object that is defined independently of any particular coordinate frame , and whose components follow specific rules for expression in different reference frames .both eq([eq : flatmetric ] ) and eq([eq : flatmetricspherical ] ) present the same geometrical object , the special relativity metric tensor , but expressed in different coordinate frames . 
within a year of the publication of einstein sgeneral relativity , schwarzschild obtained the general relativity metric which is the analog to the simplest newtonian gravitational field with .( do nt confuse the coordinate with the potential .) this generalizes eq.([eq : flatmetricspherical ] ) to s^2 = -(1 - ) t^2 + ( 1 - ) ^-1 r^2 + r^2 ^2 + + r^2 sin^2 ^2 .[ eq : schwarschild ] for large distances from the center , the schwarzschild form eq([eq : schwarschild ] ) indicates a moderate deviation from the special relativity form , and it can be shown that geodesic motion in this spacetime approximates newtonian motion quite closely . however ,if one considers smaller radii , the quantity = [ eq : potential ] can become significant , and apparently cause problems as it approaches unity ( the coefficient of diverges to infinity ; the coefficient of goes to zero ) .this situation was confusing because objects with were obviously extremely compact , so maybe this was a nonphysical configuration ; but orbits , particularly expressed in terms of the proper time of the infalling observer , showed this strange surface could be reached in the finite lifetime of the intrepid explorer willing to fall into the center of the field . only in 1960 did kruskal and szekeres recognize that the surface _ is _ special but is not singular , and show how to understand this fact .( here and henceforth we again set both and the newtonian constant equal to unity . )the trick is to realize that the schwarzschild coordinates are badly behaved near , and to introduce new time and radial coordinates ( called , the time coordinate , and , the new radial coordinate ) which behave well ( smoothly " ) there .( the angles just describe 2-dimensional spheres , and there is no reason to change _them_. ) to make the new coordinates behave well near requires _ singular _ transformations \{}\{ } , but this is justified by the fact that all particle and photon orbits remain smooth and continuous when expressed in the new coordinates .even more usefully , and are defined so that the radial coordinate speed of light , everywhere .the null lines are inclined at just as they are in a flat space diagram .this makes it easy to pick out timelike motion , or 3-spaces of constant time .the transformations giving \{ } in terms of \{ } are : ( -1 ) e^ = u^2-v^2 .[ eq : r(u , v ) ] tanh = , || 1 [ eq : t(u , v ) ] tanh = , || 1 [ eq : t2(u , v ) ] in eq([eq : r(u , v ) ] ) there is no analytic inverse for the single valued function of on the left hand side , but numerical solution is straightforward . the fact that there are two different analytical expressions relating to \{ } is partial evidence of the singular coordinate transformation .the inverse transformation giving the kruskal - szekeres coordinates \{ } in terms of \{ } , can be found in _mtw_. what is more useful than the analytic expressions eqs([eq : r(u , v ) ] - [ eq : t2(u , v ) ] ) is to graph the lines showing constant schwarzschild coordinates and , in a graph whose axes are the kruskal - szekeres coordinates and .see figure 1 . 
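The numerical inversion alluded to below eq. ([eq:r(u,v)]) is indeed straightforward; a minimal sketch in geometric units (G = c = 1, here with M = 1), using the principal branch of the Lambert-W function for r and the arctanh relations for t, is given below, together with a consistency check against the standard forward map of the right exterior wedge, u = √(r/2M − 1) e^{r/4M} cosh(t/4M), v = √(r/2M − 1) e^{r/4M} sinh(t/4M).

```python
import numpy as np
from scipy.special import lambertw

M = 1.0

def r_of_uv(u, v):
    """Schwarzschild r from Kruskal (u, v): (r/2M - 1) exp(r/2M) = u^2 - v^2."""
    x = u**2 - v**2                       # > -1 everywhere in the spacetime
    return 2.0 * M * (1.0 + lambertw(x / np.e).real)

def t_of_uv(u, v):
    """Schwarzschild t from Kruskal (u, v), regions adjacent to the right wedge."""
    if abs(v) < abs(u):                   # outside the horizon
        return 4.0 * M * np.arctanh(v / u)
    return 4.0 * M * np.arctanh(u / v)    # inside the horizon

# consistency check against the forward map (right exterior wedge)
r, t = 5.0, 3.0
f = np.sqrt(r / (2.0 * M) - 1.0) * np.exp(r / (4.0 * M))
u, v = f * np.cosh(t / (4.0 * M)), f * np.sinh(t / (4.0 * M))
print(r_of_uv(u, v), t_of_uv(u, v))       # ~ (5.0, 3.0)
```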
,plotted in the kruskal , v } plane ( straight double - dot " lines passing through the origin ) ; and the hyperbolae where the schwarzschild radial coordinate ( double - dot " curves ) .we also show several values of kruskal coordinate ( heavy dot - dash " horizontal lines .the spacelike hyperbolae ( heavy curves ) are the locations where the schwarzschild .the curvature tensor is singular at ., width=576 ] from figure 1 we see that constant schwarschild coordinate is a straight line through the origin ; is the horizontal line coinciding with the line , is the line ( light solid positive - slope line in figure 1 passing through the origin ) , is the line ( light solid negative - slope line in figure 1 passing through the origin ) . from eq([eq : r(u , v ) ] ) , constant schwarzschild coordinate defines a hyperbola given by = .if the constant is positive , this defines the situation far from the black hole .however , if one considers smaller values for this constant , corresponding to smaller constant values of , the hyperboloids approach the straight lines , which are achieved when , when the left hand side of eq([eq : r(u , v ) ] ) is zero .thus overlays ( defines the same points as ) . outside the horizon , is a timelike surface ( its tangent is a timelike vector ) ; inside the horizon it is spacelike .the volume is a three dimensional concept , and it depends only on the spatial 3-dimensional part of the metric .this means , consider the 4-d metric ( eq ( 3 ) ) with the differential of the time coordinate set equal to zero ( in the schwarschild coordinates we consider in this section ) . from the resulting 3-dimensional metric ,compute the determinant , , of the matrix of metric components .g = , schwarzschild coordinates .[ eq : schwdet ] the volume between two values of , say and is then ^r_outer_r_inner d^3x .[ eq : schwvol ] we have been asked to compute the volume inside the horizon at a fixed schwarzschild time . thus the outer limit in the integral in eq([eq : schwvol ] ) is .looking at figure ( 1 ) , we see that on _ no _ schwarzschild slice " does the coordinate extend to less than .the limits are the same : , so we expect the integral to vanish .however , the integrand is singular at , so we must investigate the integral a little further .the angular integral just yields .the integral in has a singularity at , but this singularity is in fact integrable . we can investigate this by assuming the value of to be just larger than , say , where is small . thento lowest order in , the radial integral is ^r_outer_r_inner [ eq : schwestimate ] 2 ( 2m)^ [ eq : schwestanswer ] evaluated at the limits .this integral obviously vanishes as goes to zero , i.e. as ._ there is zero volume inside the black hole in any schwarzschild time slice of a schwarzschild black hole spacetime . 
_convincing oneself of the zero result of section 3 is made easier by considering a different time - independent form of the 4-dimensional metric describing the schwarzschild spacetime , which gives a nonzero volume inside the horizon .kerr - schild coordinates provide this form .coordinates called kerr - schild coordinates , which we denote , are related ( in our spherical case ) to the schwarzschild coordinates by the transformations : r_ks = r [ eq : rks(r ) ] t_ks = t + 2 m ln ( -1 ) [ eq : tks(r ) ] the form of the 4-dimensional metric in kerr - schild coordinates is : s^2 = -(1 - ) t_ks^2 + t_ks r_ks + ( 1 + ) r^2 + r^2 ^2 + r^2 sin^2 ^2 .[ eq : ksmetric ] the kerr - schild form of the metric is , like the schwarschild form , independent of the time coordinate ( here ) .it does however contain terms that describe cross terms in the measurement of distance , cross terms between increments in time and increments in radial coordinate . for the 3-d metric , however , these terms are irrelevant .the 3-metric is simply ^3 s^2_ks = ( 1 + ) r^2 + r^2 ^2 + r^2 sin^2 ^2 [ eq : ksmetric3d ] .it is also important to know where the points in the constant slice are located with respect to the standard kruskal - szekeres picture .figure 2 shows a series of surfaces . , plotted in the kruskal , v } plane . the radial coordinate in the kerr - schild systemis the same as the standard schwarzschild coordinate.,width=576 ] it can be seen that these constant slices differ significantly from the schwarzschild slices .they penetrate inside ( which the constant schwarzschild slices do not ) , and in fact extend inward to .we thus expect a nonzero result from calculation of the volume inside the sphere .the determinant of the 3-metric eq([eq : ksmetric3d ] ) is .thus the volume at constant is ^2m_0 dr_ks dd .[ eq : ksvol ] = ^2m_0 r_ks^2 sin dr_ks dd .[ eq : ksvol2 ] even though the integrand contains the first factor which becomes infinite at as , the factor moderates the divergence , and the integral is finite .in fact , since we consider the volume for , after doing the angular integrals the integrand for the integration is clearly between and so the volume is easily analytically estimated to be between , and .the complete integral can be done analytically ( mathematica helps ) , yielding : vol_ks = [ 7- ln ( ) ] 4 ( 2m)^3 .[ eq : ksvolevaluated ] = ( 6.567 ... ) ( 2m)^3 .both the schwarschild and the kerr schild definitions of time are such that the metric ( and thus the volume inside the horizon ) is static , _i.e. _ independent of the schwarzschild time , or of the kerr - schild time at which it is computed .however it is easy to see that a collection of nearby observers falling into a black hole would recognize a time dependent gravitational field ; the tidal force gets stronger as they fall in , getting older .novikov realized that one could use the _ proper _ time , the local internal time of each observer , to define a time slicing .he based his coordinates on a collection of observers who at one schwarzschild instant ( schwarzschild , also labeled ) are instantaneously each at rest at their own maximum value of the schwarzschild coordinate .each observer sets her watch to read zero at this one instant of time .novikov also introduced a radial coordinate , called a _ comoving _ coordinate , defined by : for each observer , r^2 + 1= , [ eq : novikovcoordef ] at any later novikov time , each particle still has the same value of but has fallen to a smaller value of . 
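Both volume integrals discussed so far are easy to check numerically (geometric units, M = 1): the Schwarzschild-slice shell volume between r = 2M and 2M + ε scales as √ε and vanishes, in agreement with the estimate quoted above, while the Kerr-Schild slice gives approximately 6.57 (2M)³. In the first integral the substitution w = √(r − 2M) removes the integrable endpoint singularity.

```python
import numpy as np
from scipy.integrate import quad

M = 1.0

# Schwarzschild slice: sqrt(det g3) = r^2 sin(theta) / sqrt(1 - 2M/r).
# Substituting w = sqrt(r - 2M) turns the radial integral into 2 r^{5/2} dw,
# so the shell volume behaves as 4*pi * 2*(2M)^{5/2} * sqrt(eps) for small eps.
def schw_shell(eps):
    val, _ = quad(lambda w: 2.0 * (2.0 * M + w**2)**2.5, 0.0, np.sqrt(eps))
    return 4.0 * np.pi * val

for eps in (1e-2, 1e-4, 1e-6):
    print(f"eps = {eps:.0e}:  V = {schw_shell(eps):.3e}")

# Kerr-Schild slice: sqrt(det g3) = r^2 sin(theta) sqrt(1 + 2M/r); the volume
# inside the horizon is finite.
vol_ks, _ = quad(lambda r: r**2 * np.sqrt(1.0 + 2.0 * M / r), 0.0, 2.0 * M)
vol_ks *= 4.0 * np.pi
print("Kerr-Schild volume:", vol_ks, "=", vol_ks / (2.0 * M)**3, "x (2M)^3")   # ~ 6.57
```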
for general values of one has only an implicit functional relation between and : = ( r^2 + 1)[-]^1/2 + ( r^2 + 1)^3/2 arccos [ ( ) ^1/2 ] .[ eq : implicittau ] for each value of this is an implicit function giving , and in that particular novikov 3-space .eq([eq : implicittau ] ) holds for positive . for negative ,use the fact that the relation between and is even in : , and . once we have determined this relation, we can ( by taking the differential of eq ( [ eq : implicittau ] ) and setting ) evaluate and at any time .figure 3 shows the relation between novikov and kruskal - szekeres coordinates .( curved , roughly horizontal lines ) plotted in the kruskal diagram . coincides with , and the figure is reflection symmetric across . is the proper time of infalling observers , and is used as the time coordinate in novikov coordinates ., width=576 ] the novikov coordinate 4-metric is : s^2 = -^2 + ( ) ( ) ^2 r^2 + r^2 ( ^2 + sin^2 ^2 ) .[ eq : novmetric ] the appearances of in this equation should be eliminated so that only and appear ( using eq([eq : implicittau ] ) ) , but we keep the compact symbol to mean the function .the determinant of the 3-metric is ( ) ( ) ^2 r^4 sin^2 .[ eq : novdet ] thus the volume to be evaluated is ^r_outer_r_inner || r^2 sin dr d d [ eq : novvol ] the integrand is time dependent , because the factor depends on the novikov time .but , additionally , the _ limits _ of the integral are time dependent .for instance larger and larger values of fall toward the horizon as increases .hence the value increases monotonically as increases . because we have access to the factor , we can freely transform the integral between one expressed in coordinate , and one in coordinate .the 3-space is identical to the schwarzschild 3-space , so the volume evaluated for is the same as found for the standard schwarschild coordinate case : .this is consistent with the result from the integral ( eq([eq : novvol ] ) ) evaluated in terms of schwarschild coordinate .the upper limit in every case is .the lower limit on the initial ( ) space is also , because this is the minimum reached in the 3-space . atlater , since we are looking for the volume contained inside the horizon , continues to hold .however , decreases , because the observer originally at ( _ i.e. 
_ at ) falls inward .that person falls inward for a time of , whereupon she reaches and her world line terminates ( she is destroyed ) because of the arbitrarily large tidal forces at .she is the first observer to reach .the for each infalling observer stays fixed at its initial value , and in particular the first observer s novikov coordinate stays at .thus , between and , the inner boundary for the integral is , a value of that has to be evaluated numerically , but which decreases monotonically from to zero .we thus expect the volume inside the horizon to increase during this time .note also ( most easily seen from the diagram ) that is the smallest value of reached , but that the space inside the horizon on the left contains an equal volume , and is connected on this 3-space at .hence between and the correct integration contains a factor of : .for the singularity at intrudes to reduce the volume to that on only one side of .hence there is an instantaneous drop by half in the volume , at the time .subsequently to the limits in terms of remain fixed at , .however the volume is still a function of time because the quantities involving r in the integrand have different dependences on r , for different values of .the resulting - dependent volume is plotted in figure 4 . , which is also the proper time of the infalling observers defining the coordinates ., width=528 ]the 4-metric for the scharzschild spacetime expressed in kruskal - szekeres coordinates is : s^2_k = e^- ( -v^2 + u^2 ) + r^2 ( ^2 + sin^2 ^2 ) .[ eq : kruskalmetric ] where one must view the appearance of the function as shorthand for the function defined by eq([eq : r(u , v ) ] ) .we can investigate the time dependent ( _ i.e. _ -dependent _ ) volume inside the horizon in a 3-space in close analogy to the novikov analysis above .in fact , figure 1 shows that the 3-spaces of the kruskal - szekeres coordinates are like straightened versions of the novikov 3-spaces .completely analogously to the novikov case , an observer initially at , _i.e. _ , falls inward for a time of , whereupon she reaches and her world line terminates ( she is destroyed ) because of the arbitrarily large tidal forces at .she stays at the kruskal - szekeres coordinate as she falls . rather than compute by using the transformation to the coordinate r as we did for the novikov case , we find it simpler to compute the volume directly in the kruskal - szekeres coordinates , where the limits on the radial integration are and so long as , and , when . the determinant of the 3-metric is ( e^- ) r^4 sin^2 .[ eq : kruskaldet ] thus the volume to be evaluated is : ^u_outer_u_inner r^2 sin du d d. [ eq : kruskalint ] as a function of .the factor - of - two drop at arises because the throat pinches off " at that instant , separating two halves of the volume inside the horizon ., width=528 ] again as before , the initial ( ) slice corresponds to the schwarzschild coordinate 3-space , so the volume is zero .( this can also be seen because it is an integration with finite integrand from to when is zero . ) for somewhat larger , the limits expand , and the integral is nonzero . for spaces before the singularity appears , we include a factor because the space extends to the horizon on the left " ; at the time the singularity appears in the space ( the 3-space touches the singularity ; the 3-space evolves a singularity ) and the volume drops by half . 
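The time dependence of the Kruskal-slice volume (figure 5) can be reproduced with a short numerical integration. The sketch below assumes the standard Kruskal conformal factor (32M³/r) e^{−r/2M}, reuses the Lambert-W inversion of r(u, v), and implements the limits described above: u ∈ [0, v] doubled while v ≤ 1, then u ∈ [√(v²−1), v] on a single side once the slice meets the singularity; units are M = 1.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import lambertw

M = 1.0

def r_of_uv(u, v):
    w = lambertw((u**2 - v**2) / np.e).real
    return max(2.0 * M * (1.0 + w), 0.0)   # clip tiny round-off near r = 0

def integrand(u, v):
    # sqrt(det g3) integrated over the angles: 4*pi*sqrt(32 M^3 r^3) exp(-r/4M)
    r = r_of_uv(u, v)
    return 4.0 * np.pi * np.sqrt(32.0 * M**3 * r**3) * np.exp(-r / (4.0 * M))

def volume_inside_horizon(v):
    if v <= 0.0:
        return 0.0
    if v <= 1.0:                               # the slice has not reached r = 0 yet
        val, _ = quad(integrand, 0.0, v, args=(v,))
        return 2.0 * val                       # both halves joined through u = 0
    u_min = np.sqrt(v**2 - 1.0)                # the slice now ends on the singularity
    val, _ = quad(integrand, u_min, v, args=(v,))
    return val                                 # only the piece on the right side

for v in (0.5, 0.999, 1.001, 2.0, 5.0):
    print(f"v = {v}:  V = {volume_inside_horizon(v):.3f}")
```

The factor-of-two drop between v just below and just above 1 reproduces the pinch-off of the throat discussed in the text.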
subsequently the evolution is over limits , , and the volume continues to evolve .figure 5 presents the time evolution of the volume .the area of a schwarzschild black hole is unique , and can be defined by an idealized transverse measurement of a particular spherical surface .in contrast , the volume inside a black hole requires a definition of the particular 3-space in which the volume is computed , which may be explicitly time dependent , and an understanding of the ( possibly time dependent ) limits of the integral required to compute this volume . understanding these points and computing the volumes as we have done here introduces and uses a number of concepts and techniques of general use in relativistic calculation , andcan be a useful pedagogical tool .this work was supported by nsf grant phy 0354842 , and by nasa grant nng04gl37 g .we thank michael salamon for bringing this problem to our attention . c. w. misner , k. s. thorne , and j. a. wheeler , _ gravitation _ ( w.h .freeman , new york , 1970 ) .a. einstein , _ preuss .berlin , sitzber ._ 778 - 786 ( 1915 ) , ibid .799 - 801 ( 1915 ) , ibid .844 - 847 ( 1915 ) .k. schwarzschild , _ sitzber .berlin , kl ._ , 189 - 196 ( 1916 ) m. d. kruskal , _ phys .rev . _ * 1119 * 1743 - 1745 ( 1960 ) .g. szekeres , _ publdebrecen _ * 7 * 285 - 301 ( 1960 ) roy kerr , a. schild , in _ iv centenario della nascita di galileo galilei , 1564 - 1964 , _ p.222 g. barbera , editor ; pubblicazioni del comitato nazionale per le manifestazioni celebrative , firenze ( 1965 ) . for the spherical case we consider , these coordinates were previously discovered by eddington and rediscovered by finkelstein , so the spherical coordinates are often called eddington - finkelstein " coordinates .
|
the horizon ( the surface ) of a black hole is a null surface , defined by those hypothetical outgoing " light rays that just hover under the influence of the strong gravity at the surface . because the light rays are orthogonal to the spatial 2-dimensional surface at one instant of time , the surface of the black hole is the same for all observers ( _ i.e. _ the same for all coordinate definitions of instant of time " ) . this value is for nonspinning black holes , with newton s constant , speed of light , and mass of the black hole . the 3-dimensional spatial volume inside a black hole , in contrast , depends explicitly on the definition of time , and can even be time dependent , or zero . we give examples of the volume found inside a standard , nonspinning spherical black hole , for several different standard time - coordinate definitions . elucidating these results for the volume provides a new pedagogical resource of facts already known in principle to the relativity community , but rarely worked out . _ keywords : _ numerical relativity ; time slicing ; black hole .
|
in , an object called _ the velocity tensor _ was described .it comes from a generalization of the equation into a generally covariant form the two - dimensional matrix of the classical velocity tensor takes the form while in the relativistic case where is some arbitrary constant , and .as was shown in , the tensorial description has an obvious advantage over a standard description since it does not use the notion of the proper time and therefore it allows a description of non - uniform motions and systems with an arbitrary number of material points .it also provides a cornerstone for formulating a generally covariant mechanics .however , the velocity tensor deals solely with kinematical issues . to make the tensor description complete we need to introduce another tensorial object called _ the momentum tensor _ . by means of this tensorit is possible to solve dynamical problems .in classical and relativistic mechanics the following formula holds true : the tensorial equivalent of eq . ([ 6 ] ) is presumed to be where is the momentum tensor and is an influence of the exterior on a body .it should be stressed here that we do not assume _ a priori _ the relationship between and .the choice of the form of the mixed tensor comes from the assumption that the momentum tensor should be some function of the velocity tensor .since the velocity tensor is a function of a classical velocity , the momentum tensor is .in general , the momentum tensor is represented by a square matrix where the elements are some functions of velocity variable with time . in order to determine them ,we make use of the transformation relation for a mixed tensor .passing from an inertial reference frame to an inertial system that moves with velocity relative to , the momentum tensor transforms in accordance with the following formula or in matrix notation assuming that is form - invariant , i.e. , we arrive at a functional equation for in the form where is the velocity of a material point in the system and is its velocity in .it is easy to prove that after some simple substitutions and rearrangements in eq .( [ 10 ] ) we get the solution where is an arbitrary square matrix formed by constant elements .in this case we substitute in eq .( [ 11 ] ) the galilean transformation in the form and hence we get where all elements in eq .( [ tp5 ] ) are constant . since the above equation is only time - dependent , eq .( [ 7 ] ) leads to the expression where , and therefore we get and hence , in order to reconstruct the classical newtonian equation of motion we have to assume that where is mass of a material point and is a classical newtonian force in a two - dimensional spacetime . the choice of the sign in eq .( [ tpphi ] ) results from considerations in higher dimensional spacetimes .it results from eqs .( [ tp5 ] ) , ( [ fi0 ] ) and ( [ fi1 ] ) that only the element takes part in dynamical processes since no other coefficient appears in eq .( [ fi0 ] ) .therefore , the other elements may take arbitrary values and each specific choice among them will lead to the same dynamics . in particular , we may choose them in such way that the relation is satisfied . keeping in mind that is given by eq .( [ 3 ] ) , we get that the fact that in the considered case leads to the general assumption that the component plays a key role in the dynamics , and the components are auxiliary quantities that provide the formalism covariance . 
in the case of substituting into eq .( [ 11 ] ) the lorentz transformation given by we get that according to eq .( [ tppartial ] ) we obtain that \nonumber \\ & = & \gamma^{4}\dot{\beta}\left[\left(1+\beta^{2}\right)\left(\pi^{1}_{0}-\pi^{0}_{1}\right ) + 2\beta\left(\pi^{0}_{0}-\pi^{1}_{1}\right)\right ] , \\\nonumber \\\phi_{1 } & = & \partial_{0}\pi^{0}_{1}(\beta)=\partial_{0}\gamma^{2}\left[\pi^{0}_{1}+\beta(\pi^{1}_{1}-\pi^{0}_{0})-\beta^{2}\pi^{1}_{0}\right]\nonumber \\ & = & \gamma^{4}\dot{\beta}\left[\left(1+\beta^{2}\right)\left(\pi^{1}_{1}-\pi^{0}_{0}\right ) + 2\beta\left(\pi^{0}_{1}-\pi^{1}_{0}\right)\right ] . \end{aligned}\ ] ] as we can observe , generally all coefficients take part in the dynamics in this case since all of them are present in eq .( 20 ) . in order to illustrate the role of parameters let us consider a general case of dynamics where .after the integration of eq .( [ tpsilarel ] ) we find that =\phi_{0}t+c,\ ] ] where is an integration constant .taking into consideration the initial condition for we obtain that ,\ ] ] where and are the values for .if we additionally assume that ( i.e. ) then . substituting this into eq .( [ kwadrat ] ) and making simple rearrangements we arrive at the following : the solutions of the above equation are of the form in the standard formalism of the special theory of relativity , when a constant force is applied to a body one gets the following solutions of the equations of motion for a velocity : if we expect that eq .( [ rowkwad ] ) also has two symmetric solutions , we have to assume that hence in this case we find that ( green ) and ( red ) . , , are assumed here.,scaledwidth=50.0% ] it should be stressed here that the asymptotes of eqs .( [ betarelstan ] ) and ( [ betarel5 ] ) are identical , i.e. : as we can see from eq .( [ betarel5 ] ) , the constant plays the role of a `` renormalization '' constant for , hence it can be discarded without losing the generality of considerations. then matrix ( [ tp7 ] ) takes the form matrix ( [ tp1stw2 ] ) can also be rewritten as where the second matrix on the right hand side of eq .( [ tp1stw3 ] ) is constant in time .assuming that and , eqs .( [ tpsilarel ] ) and ( 21 ) turn into in order to compare it with the standard formalism of the special theory of relativity , let us recall that in the standard description the equation of motion is given by and therefore substituting this expression into eq .( [ tpsilarel2 ] ) we get this indicates that the assumption that in this formalism and the standard description is the same leads to the conclusion that for a force constant in time the component is not constant in time , and _vice versa_. however , the uniform motion ( ) in both formalisms occurs simultaneously .the non - trivial part of the matrix ( [ tp1stw3 ] ) can also be expressed by means of well - known relativistic quantities such as energy and momentum : therefore we get it should be highlighted here that as was mentioned before it is possible to choose a different special form of the relativistic velocity tensor matrix and consequently a different description of dynamics .for instance , by analogy with the non - relativistic solution , we can assume that the relation between the velocity tensor described by eq .( [ 4 ] ) and the momentum tensor is given by eq .( [ pfv ] ) . 
hence in order to reproduce eq . ( [ pfv ] ) the general form of the momentum tensor matrix ( [ tp7 ] ) has to be reduced to the matrix where , as we have shown for the non - relativistic case , the constant can be identified with the mass of a material point . it is easy to observe that form ( [ p2rel ] ) is obtained from eq . ( [ tp7 ] ) , where all coefficients with the exception of vanish . therefore , eqs . ( [ tpsilarel ] ) and ( 21 ) can be written down as : the aim of this paper was to introduce a new dynamical object called the momentum tensor as an analogue to the kinematical velocity tensor , and therefore to complete the tensorial description of classical and relativistic mechanics . calculations show that the choice of constants in the momentum tensor matrix results in different models of dynamics in the relativistic case . another important fact is that the naturally assumed relation between the tensors : is just one among many . further investigations will focus on verifying the other models . i would like to thank prof . edward kapucik for his scientific advice , and also for useful comments and ideas on this subject .
|
* abstract : * this paper introduces a new object called _ the momentum tensor_. together with _ the velocity tensor _ it forms a basis for establishing the tensorial picture of classical and relativistic mechanics . some properties of the momentum tensor are derived as well as its relation with the velocity tensor . for the sake of clarity only the two - dimensional case is investigated ; however , the general conclusions are also valid for higher - dimensional spacetimes . pacs numbers : 45.20.d- , 03.30.+p . _ keywords : _ relativistic classical mechanics , velocity tensor , momentum tensor .
|
complex social networks associated to _ internet _ , like as e - mail lists , news services etc .do not have a structured architecture like a project in network communication engineering .they show some kind of synergia given by the great amount of users who form the mentioned community .this study faces some problems , some time neither cultural guidelines are taken into account and the results are generalized with quite different guidelines .this is why we have restricted our analysis to one language and to the technical communities as an experimental application of the theoretical tools in social networks .this work is organized in two sections : the first one is devoted to a brief introduction on social networks and the other is referred to the study of communities coming from e - mail and `` news services '' in spanish language .a social network is a set of relations ( links or edges ) among different elements ( nodes , vertices or actors ) .formally a network is a graph where is the set of vertices , is the set of edges and such .that means , to each edge a pair of vertices are assigned and they are known as ends of edge . recently forthe the networks study binary matrices are used , therefore an isomorphism exists , where is the set of square binary matrices of dimension _ nxn_. this matrix is called adjacency matrix ( am) .social scientists defined by convention that actors ( output , egos ) are placed in the rows , while the attributes or related actors ( input , alter ) in the columns .the am is symmetric since we are dealing with non directed graphs .there are _ multiple _ graphs in which more than one kind of edges are identified as : ( kinship , friendship ) ; ( theme , author ) ; etc. this would be quite useful for building social substructures although for building the required am is more complicated .usually we regard an associated am with some particular projection i.e. _ kinship_. otherwise graphs may be weighted , that means it is possible to assign a weight to each link .this gives us a non zero value associated to each am element .as can be seen the am has all sensitive information related with the social network in particular .it is worth to notice that the diagonal elements in the matrix are filled with zeros or are neglected in the algorithms since the self interaction has no sense .however , then have to be considered in their booleans products . on the other handsome other properties can be associated with the am .`` macroscopic '' properties among the actors as * path * : is a concatenation of vertices connected by edges so that no chosen edge is repeated where is the initial vertex and the final vertex .* geodesic * : is the path of minimal length among actors . if it is not exist , as in the case of the non connected graphs , the infinite value is taken like the length . the geodesic path is the optimal path between actors , because socially , are actors with strongest links .also there are `` microscopic '' properties as * clique*. this is a measurement of the transitive triades of the network .two definitions exist , an originating one of theory of graphs . 
which knows it like transitivity a graph : where is the cardinality of the set and the other one was formulated by watts y strogatz , known as * clustering * associated with the actor is defined as : and then averaged over the actors this is the `` clustering'' .as much in a case as in another one , the transitive triades are small groups that represent the balance or the natural state towards which tend the social relations . but in either case they are small groups .another `` microscopic '' property is the average degree of connection among the closest neighbors ( cn) defined as : where is the conditional probability the a vertex of degree is connected with another one with degree . on the other handwhen is an increasing function the network is called _ associative _ , and if is decreasing is called _ dissociative_. also we may characterize the network from rows histograms , known as * prestige of the actor * this is coincidently with the columns histogram known * popularity of the actor*. but at the moment another kind of statistics is used called the _ connectivity _ probability , . is the probability that a randomly chosen vertex have edges . according with the functional shape of the histogram s tail ( ) the network can be classified as * exponentials* , when ; * scale - free * , when with ; * broad - scale * , when is `` scale - free '' but with an abrupt cutoff ; and * single - scale * , when has a fast asymptotic decay . in mostly field experimentsthe * scale - free * network was shown as the dominant . in order to estimate the exponent the cumulative probability is used defined as : if in the tail , then .all the non ponderated properties of the relation among actors are included in the am .each index row is associated with the actor who generates the subject , * author root * and each column index with the actors involved in the thread of conversation * author descendent*. a lexicographical arrangement is not used but by prestige , in other words , a low index is assigned to the actor with the greatest absolute frequency and successively until the index of greater value corresponding to the author of less prestige . for the construction of the matrix we have not taken into account the self - answers and no the threaded demands ( without thread ) . due to this factthey are discarded in previous phases to the application of the algorithm . according with the standard rfc 822 and derivate rfc 2822 and 1036 the transmission format of the messages coming from electronic mail and news servicesshould be composed by some headers fields and a body in plain text .from all the fields sent in a single message the following fields are required to construct threads of messages : : from : : : this field contains the identity and direction of the person who sends the message subject : : : this field indicates the nature of the message .message - id : : : this field must be unique for each message the suggested format is `` '' . in - reply - to : : : refers to a or the * menssaje - id * where is the message is answered , if the message is new this field does not appear . before the use of threading algorithm , the messages go through a filter which extract the field of interest and delete unwanted message . this filter is written in_perl_ . 
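the filter mentioned above is written in perl and is not reproduced in the text . as a rough illustration of this preprocessing step , the python sketch below keeps only the header fields needed for threading ( from , subject , message - id , in - reply - to , following rfc 2822 ) and discards messages without a message - id ; the file layout and function names are assumptions made for the example , not part of the original filter .

....
# illustrative preprocessing sketch: extract the header fields needed for
# threading from a unix mbox file, discarding messages without a Message-ID.
import mailbox

def extract_headers(mbox_path):
    records = []
    for msg in mailbox.mbox(mbox_path):
        mid = msg.get("Message-ID")
        if mid is None:                      # unusable for threading
            continue
        records.append({
            "from": msg.get("From", ""),
            "subject": msg.get("Subject", ""),
            "message_id": mid.strip(),
            "in_reply_to": (msg.get("In-Reply-To") or "").strip() or None,
        })
    return records

# records = extract_headers("list.mbox")    # hypothetical input file
....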
since mostly of the codes for messages threading are not free domain it was necessary to write our own code in `` c '' language from an existing one gpl in `` java''language , modified for generating a list from the actors sequence related with a thread , having as the list rootthe actor whose give the beginning of the thread .this algorithm have prove its robustness in hundred thousand trials .a brief sketch of the algorithm is given .the algorithm is based on the handling of connected structures of data which are : .... container { message message ; container parent ; container child ; container next ; } ; .... the field `` message '' , may be * nil*. the structure `` message '' have the following fields : .... mensaje { char * subject ; char * message_id ; char * in - reply - to ; char * from ; } ; .... when the field `` in - reply - to '' , take the value * nil * is indicating the message father .an indexed table is associated where in index is `` message_id '' from the message parent .then a `` container '' root or parent is associated .after that using the threading algorithm a message data base descendent associated is built . a table ordered by absolute frequencies of appearance of the `` author '' of the message fatheris generated with decreasing order .finally a am is built from the previous results .for algorithm details see `` _ message threading of jamie zawinsky , technical report _ '' .we have taken as leading cases two mailing lists in spanish which represents the observations done in previous studies .one is a purely technical list called * lu * and another one with the same actors but with a more general thematic called * mix*. this work have as null hypothesis those obtained from the algorithm `` _ configuration model _ ''( cm) composed by 1200 vertices .this allow us to build random networks with a given probability density of edges by vertex , .we have used with , since for this value stands the behavior for for the null hypothesis and the real cases . as can be observed in fig-[fig:1 ] in both cases the asymptotic behavior of is agree with those proposed by the cm .this dissociated mixed behavior is quite different to those found in jazz communities or scientific collaborators network , due their behavior is purely mixed associative .the following is a comparative table between the calculated values of `` _ clique _ '' , * c * by using watts and strogatz s algorithm and the averaged geodesic * g * for each case . [ cols="<,^,^,^",options="header " , ] in order to calculate these parameters for real cases , non connected graphs were taken into account , that means that is not dense the closure adjacency matrix obtained from the warshall algorithm .this can be observed in fig-[fig:2 ] .this show different values from the averaged geodesic due to the fact that some actors are not linked .therefore we adopted an _ ad hoc _ criterion .the parameters were calculated in the maximal dense subgraphs where are the more popular actors which is according with _ in situ _ observations .that is , the behavior of a mailing list is given by the more related actors and not by the isolated or casuals answerers .because of this the number of vertices is reduced in 60% which have no relevance due to the huge number of actors .the cumulative probability is the most significative evidence of the difference between the null hypothesis and the real cases . 
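before turning to the tail analysis , the sketch below shows one way to obtain the quantities just mentioned from the adjacency matrix : the popularity ( column - sum ) degrees , the cumulative probability defined as the fraction of actors with degree at least k , and a quadratic fit of its tail in log - log coordinates . it is an illustrative numpy rendering of the procedure , with the lower cutoff of the fit left as a free parameter .

....
# popularity (column-sum) degrees from the adjacency matrix, the cumulative
# probability P(k) = fraction of actors with degree >= k, and a quadratic fit
# of its tail in log-log coordinates.
import numpy as np

def cumulative_distribution(am):
    degrees = np.asarray(am).sum(axis=0)            # popularity of each actor
    kmax = int(degrees.max())
    pk = np.array([(degrees == k).mean() for k in range(kmax + 1)])
    return pk[::-1].cumsum()[::-1]                  # tail-cumulative distribution

def loglog_quadratic_fit(pcum, kmin=1):
    ks = np.arange(kmin, len(pcum))
    mask = pcum[ks] > 0
    x, y = np.log10(ks[mask]), np.log10(pcum[ks][mask])
    return np.polyfit(x, y, 2)                      # coefficients of a*x^2 + b*x + c
....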
as can be seen in fig-[fig:3 ] the tail ( ) is quite different from the theoretical straight line of the * cm * , typical of `` _ scale free _ '' networks . the behavior is `` _ single - scale _ '' . instead of performing a linear fitting we use a quadratic fitting . in fig-[fig:4 ] the goodness of the fit may be noticed both in the tail and in the intermediate range . this gives us the possibility of discarding the behavior of the tail and replacing it by another one , where for * lu * and for * mix*. it is worth noticing that the same actors are linked in a different way if the themes are different ; in this case the tail for the list * lu * is not the same as the tail for the list * mix*. therefore the language itself is not the only constraint on the network behavior ; the social paradigm in which the actors are involved also matters . this also indicates that the cumulative probability could be considered as a qualifying element of the social behavior in this kind of society . in this work we concluded that the social relations among a set of identical actors are strongly linked with the social paradigm in which they are involved . on the other hand , at least in the societies under analysis , the tail behavior allows us to quantify their differences . we may speculate that the dissociative character of these societies may be attributed to the fact that these are open societies instead of closed ones such as _ small - world _ networks . that means that not all the actors are related to each other by answering the mails , and some actors cause the extinction of a theme by avoiding any close link .
|
social networks are analyzed as graphs within the scope of discrete mathematics , which has a great range of applications in different contexts such as technology , social phenomena and biological systems . at present this theory gives a set of tools for a phenomenological analysis that would be difficult or almost impossible with a different approach . in this work social networks for different technical communities from electronic mail and `` news '' services in the spanish language are constructed . the algorithm is based on the rfc2822 and rfc1036 standards to build threads of messages . the results are quite different from those obtained for other kinds of communities such as the jazz musicians community . nevertheless they show an analogy to random graphs obtained by the `` configuration model '' method . this draws attention to the fact that some generalization assumptions are not correct .
|
wireless networks consisting of energy harvesting nodes continue to gain significance in the area of green communications . these networks _harvest _ energy from external sources in an intermittent fashion , and consequently require careful management of the available energy .there is considerable recent research on energy management for energy harvesting networks .reference considers an energy harvesting transmitter with energy and data arrivals , and an infinite size battery to store the harvested energy , and shows the optimality of a piecewise constant power policy for minimization of completion time of a file transfer . in , the throughput maximization problem is solved when the energy storage capacity of the battery is limited .it is shown that the transmission power policy is again piecewise constant , changing only when the battery is full or depleted .extension of the model in to fading channels is studied in where a directional water - filling algorithm is shown to yield the optimum transmission policy .reference also considers throughput maximization for a fading channel under the same assumption .the impact of degradation and imperfections of energy storage on the throughput maximizing policies is studied in . the single user channel with an energy harvesting transmitter and an energy harvesting receiver is considered in , and decoding and sampling strategies for energy harvesting receivers is considered in .various multi - user energy harvesting networks have also been studied to date ; including multiple access , broadcast , and interference channels with energy harvesting nodes .in addition to these multi - user setups , variations of the energy harvesting relay channel are studied in , including multiple energy harvesting relays . in this work ,we study the simplest network setup that embodies a cooperative communication scenario with two - directional information flow , with the goal of identifying design insights unique to such scenarios .this leads to the investigation of bi - directional communication with energy harvesting nodes .specifically , we study the so - called separated two - way relay channel with energy harvesting nodes .the channel is separated in the sense that the users can not hear each other directly , i.e. , communication is only possible through the relay .this model is relevant and of interest for peer - to - peer communications , or for any scenario where a pair of nodes exchange information , and avails the relay node to implement strategies to convey both messages simultaneously .the two - way relay channel ( twrc ) with conventional ( non - energy - harvesting ) nodes is studied with various relaying strategies such as amplify - and - forward , decode - and - forward , compress - and - forward , and compute - and - forward in half - duplex and full - duplex models .it is observed that different relaying schemes outperform the others for different ranges of transmit powers . in this paper , we identify transmission power policies for the energy harvesting two - way relay channel ( eh - twrc ) which maximize the sum - throughput .the energy harvesting relay can perform amplify - and - forward , decode - and - forward , compress - and - forward , or compute - and - forward relaying .due to intermittent energy availability , the channel calls for relaying strategies that adapt to varying transmit powers . 
for this purpose ,we introduce a relay that can dynamically change its relaying strategy , resulting in what we term hybrid relaying strategies .we derive the properties of the optimal offline transmission policy , where energy arrivals are known non - causally , with the goal of gaining insights into its structure .next , we show that an iterative generalized directional water - filling algorithm solves the sum - throughput maximization problem for all relaying strategies .we next find the optimal online transmission policy by formulating and solving a dynamic program , where the energy states of the nodes are known causally .we compute optimal policies for different relaying strategies and provide numerical comparisons of their sum - throughputs .our contribution includes generalization of directional water - filling to an interactive communication scenario with multiple energy harvesting terminals in the offline setting , as well as the identification of optimal policies in the online setting .the interactive communication scenario considered in this paper is the catalyst that can drastically change the resulting power allocation algorithms in the energy harvesting setting .the two - way relay channel is the simplest multi terminal network model that demonstrates this interaction , and hence is the model considered .we observe that the relaying strategy has a significant impact on the optimum transmission policy , i.e. , transmit powers and phase durations , and that hybrid relaying can provide a notable throughput improvement for the eh - twrc .the remainder of the paper is organized as follows .the system model is described in section [ sect_model ] . in section [ sect_hybrid ] ,a hybrid relaying scheme where the relay can alter its strategy depending on the instantaneous powers is introduced . in section [ sect_properties ] , the sum - throughput maximization problem is presented for an eh - twrc , and is divided into subproblems that can be solved separately . in section [ sect_identify ] ,the iterative generalized directional water - filling algorithm is proposed to find an optimal policy for the eh - twrc .the online policy based on dynamic programming is provided in section [ sect_online ] .numerical results are presented in section [ sect_numerical ] .the paper is concluded in section [ sect_conclusion ] .we consider an additive white gaussian noise ( awgn ) two - way relay channel with two source nodes , and , that convey independent messages to each other through a relay node .the two source nodes can not hear each other directly , hence all messages are sent through the relay .the channels to and from a source node are reciprocal , with power gains between nodes and and between nodes and .we consider the delay limited scenario , where the relay forwards messages as soon as they are received , and thus has no data buffer .the channel model is shown in figure [ fig_model ] . , .] 
all nodes , and are powered by energy harvesting .node , , harvests units of energy at time , and stores it in a battery of energy storage capacity .any energy in excess of the storage capacity of the battery is lost .the initial charge of the batteries are represented with , with by definition .the time between the and energy arrivals , referred to as the epoch in the sequel , is denoted by .we remark that the model does not require all nodes to harvest energy packets simultaneously ; but rather indicates that epochs are constructed as the intervals between any two energy arrivals .a node that is not receiving any energy at the harvest is set to have .the energy harvesting model is depicted in figure [ fig_energy_model ] .we consider a transmission session of epochs , with length , for which the energy harvesting profile consists of and for and . in epoch , , node , , allocates an average power for transmission , i.e. , a total energy of is consumed for transmitting .since the energy available to each node is limited by the energy harvested and stored in the respective battery , the energy harvesting profile determines the feasibility of for each node .specifically , the transmission powers satisfy where is the energy available to node at the beginning of epoch , which evolves as in this work , similar to references , the energy harvesting profile is known non - causally by all nodes , so that offline optimal policies and performance limits of the network can be found . ] . the communication overhead for conveying energy arrival information and power allocation decisions is considered to be negligible compared to the amount of data transferred in each epoch .we consider the problem of finding the power policy which maximizes the sum - throughput of the system under different relaying strategies such as decode - and - forward , compute - and - forward , compress - and - forward and amplify - and - forward . in the next subsection, we present the rate regions for these relaying strategies .we focus on a two - phase communication scheme , consisting of a multiple access phase from nodes and to and a broadcast phase from to and .this is referred to as multiple access broadcast ( mabc ) in .its three phase counterpart , time division broadcast ( tdbc ) , can be shown to perform no better than mabc in the absence of a direct channel between and , and is therefore omitted . for half - duplex nodes ,the rates achievable with decode - and - forward , compress - and - forward , amplify - and - forward and compute - and - forward relaying schemes are derived in .these works consider nodes that are constrained by their instantaneous transmit powers , and do not consider total consumed energy , which depends on the duration of multiple access and broadcast phases . since our model is energy - constrained , we revise the results of these work by scaling transmit powers with phase duration , thereby replacing instantaneous transmit powers with average transmit powers .we denote the set of rate pairs achievable with average transmit powers , and and multiple access phase duration as in the half - duplex case .the duration of the broadcast phase is . for full - duplex nodes , due to simultaneous multiple access and broadcast phases, there is no need for the time sharing factor ; we use to denote the achievable set of rate pairs . 
in this case, the full - duplex nodes can remove the self - interference term form the received signal , as in .we use a subscript to denote the relaying strategy where necessary . at .amplify - and - forward rates remain just below compress - and - forward and thus are not visible . ] * decode - and - forward : * in this scheme , the relay decodes the messages of both source nodes in the multiple access phase , and transmits a function of the two messages in the broadcast phase .nodes and use the broadcast message along with their own messages to find the ones intended for them . for half - duplex nodes , the rate region in epoch is defined by at each node .this is done by first scaling and to establish unit variance noise at nodes and , and subsequently scaling the transmit power , available energy and battery capacity at nodes and to yield a unit variance noise at . ] [ eqn_rates_hd_mabc ] where . with full - duplex radios ,the two phases take place simultaneously , achieving instantaneous rates which are found by substituting in ( [ eqn_rates_hd_mabc ] ) .* compress - and - forward : * in this scheme , the relay transmits a compressed version of its received signal in the broadcast phase .the instantaneous rates , , for the mabc half - duplex case satisfy [ eqn_rates_hd_cf ] for some and , where .the full - duplex rates are where , and [ eqn_rates_fd_cf_sigmas ] * amplify - and - forward : * in this scheme , the relay broadcasts a scaled version of its received signal .since this is performed on a symbol - by - symbol basis , the time allocated for multiple access and broadcast phases are equal .the rate regions are found as [ eqn_rates_fd_af ] by substituting for the half - duplex case and for the full - duplex case .* compute - and - forward ( lattice forwarding ) : * in this scheme , nested lattice codes are used at the source nodes , and the relay decodes and broadcasts a function of the two messages received from the sources .each source then calculates the intended message using the side information of its own .the rate region achievable with this scheme for an mabc half - duplex relay consists of rates satisfying [ eqn_rates_hd_lf ] where and .the full - duplex rate region can be evaluated by substituting in ( [ eqn_rates_hd_lf ] ) . in reference , it is shown that this strategy achieves within bits of twrc capacity in each epoch . at .amplify - and - forward rates remain just below compress - and - forward and thus are barely visible . ] it can be observed that the compute - and - forward rates are not jointly concave in transmit powers , .this implies that time sharing between two sets of transmit powers and with parameter , consuming average powers , , can yield rates . to include rates achievable as such, we concavify the rate region by extending to include all time - sharing combinations with average power , i.e. , which we refer to as the concavified rate region .this extends to the half - duplex relaying region by time sharing among as well . with a slight abuse of notation, we will denote the concavified regions with and in the sequel .we note that all rates in the concavified region are achievable via time - sharing within an epoch , while the average powers within said epoch , and hence energy constraints , hold by definition .a formal proof of this concavification follows ( * ? ? ?1 ) closely . 
in the sequel, we use the concavified region , though we do not reiterate the required time - sharing for clarity of exposition .since we are interested in maximizing sum - throughput , we compare the maximum achievable sum - rates for full- and half - duplex nodes employing the relaying strategies above in figures [ fig_comp1_fd ] and [ fig_comp1_hd ] , respectively . in these evaluations ,a symmetric channel model normalized to yield , and a fixed relay power of is considered .it can be observed that different schemes may outperform based on the instantaneous transmit power , and thus the selection of the correct relaying scheme is of importance in an energy harvesting setting where transmit powers are likely to change throughout the transmission .in section [ sub_model_rates ] , it is observed that depending on the transmit powers , either one of the relaying strategies may yield the best instantaneous sum - rate . due to the intrinsic variability of harvested energy ,transmit powers may change significantly throughout the transmission period based on the energy availability of nodes .consequently , a dynamic relay that chooses its relaying strategy based on instantaneous transmit powers of the nodes can potentially improve system throughput .another benefit of switching between relaying strategies is achieving time - sharing rates across strategies , e.g. , switching between decode - and - forward and compute - and - forward strategies within an epoch , which can outperform both individual strategies with the same average power .an example of the benefits of time - sharing in a two - way relay channel is reference , where time - sharing between different operation modes is considered . in , a fixed relaying strategyis employed with different nodes transmitting at a time ; while here we allow time - sharing between different relaying strategies . the rates achievable with a hybrid strategy switching between the four relaying schemes in figures [ fig_comp1_fd ] and [ fig_comp1_hd ] consist of the convex hull of the union of rate pairs achievable by the individual schemes .the rate region for the hybrid scheme is expressed as where , , and are the rate regions given in section [ sub_model_rates ] with decode - and - forward , compute - and - forward , compress - and - forward and amplify - and - forward , respectively . for the purpose of demonstration, we present the chosen relaying scheme that maximizes the instantaneous sum - rate for a half - duplex channel with fixed relay transmit power , , in figure [ fig_sims_hybrid ] .it can be observed that while decode - and - forward or compute - and - forward alone are chosen at the extremes , a time - sharing of the two strategies is favored in between . in this figure ,the regions where the hybrid scheme uses time - sharing are shown in two shades of blue .we note that for these channel parameters , the remaining relaying schemes under - perform these two for any choice of transmit powers . at .the labels `` over d&f '' and `` over lf '' denote which of the two strategies is better by itself in that region . 
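as a concrete illustration of how such per - power sum - rates can be evaluated , the sketch below computes a full - duplex decode - and - forward sum - rate for one epoch and then takes the best value over a list of such functions , which is the per - epoch choice made by the hybrid scheme just discussed . the rate expressions used here are the standard decode - and - forward forms ( individual rates limited by both the multiple - access and broadcast links , plus a multiple - access sum constraint ) with an assumed normalization of the logarithm ; they stand in for , and do not reproduce , the exact expressions of the paper .

....
# illustrative full-duplex decode-and-forward sum-rate for one epoch, using
# the standard constraint structure (assumed form, normalization arbitrary):
#   r1 <= min(c(g1*p1), c(g2*pr)),  r2 <= min(c(g2*p2), c(g1*pr)),
#   r1 + r2 <= c(g1*p1 + g2*p2),    with c(x) = log2(1 + x).
import math

def c(x):
    return math.log2(1.0 + x)

def df_sum_rate_fd(p1, p2, pr, g1, g2):
    r1 = min(c(g1 * p1), c(g2 * pr))      # node 1 -> relay -> node 2
    r2 = min(c(g2 * p2), c(g1 * pr))      # node 2 -> relay -> node 1
    return min(r1 + r2, c(g1 * p1 + g2 * p2))

def best_strategy_sum_rate(strategy_fns, p1, p2, pr, g1, g2):
    """per-epoch choice of a hybrid relay: best sum-rate over the strategies."""
    return max(fn(p1, p2, pr, g1, g2) for fn in strategy_fns)

# example with symmetric unit gains and unit powers (arbitrary units)
print(best_strategy_sum_rate([df_sum_rate_fd], 1.0, 1.0, 1.0, 1.0, 1.0))
....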
] with these observations , we conclude that policies with hybrid relaying strategies can instantaneously surpass the sum - rates resulting from individual relaying schemes for a considerable set of power vectors .furthermore , time - sharing between relaying strategies may strictly outperform the best relaying strategy alone .numerical results on the performance of optimal hybrid schemes in comparison with individual schemes are presented in section [ sect_numerical ] .we consider the problem of sum - throughput maximization for a session of epochs .since achievable rates are either jointly concave in transmit powers or can be concavified by the use of time sharing as in ( [ eqn_model_concavify ] ) , it follows that the optimal transmit powers remain constant within each epoch , as noted in ( * ? ? ?* lemma 2 ) .the power policy of the network consists of the power vectors , where , , and in the case of half - duplex relaying , the time sharing parameters , .for the set of feasible power policies , we first present the following proposition , which is the multi - user extension of ( * ? ? ? * lemma 2 ) : [ lem_overflow ] there exists optimal average transmit powers that do not yield a battery overflow at any of the nodes throughout the communication session .let be a vector of transmit powers yielding battery overflows , i.e. , for some and .for each battery overflow of amount at node at the end of epoch , let .for the remaining powers , let . the power policy defined by not overflow the battery at any time , and satisfies for all and .note that nodes consuming powers can achieve any rate pair that is achievable with less power , i.e. , [ eqn_prop_nondiminishing ] for full - duplex and half - duplex nodes with , respectively .therefore , the sum - rate obtained by at any epoch is no less than that of .hence , for any policy with battery overflows , we can find a policy performing at least as good without overflows .we remark that even though ( [ eqn_prop_nondiminishing ] ) does not hold immediately , e.g. , for the amplify - and - forward rates in ( [ eqn_rates_fd_af ] ) , it holds by definition for the concavified rates in ( [ eqn_model_concavify ] ) . by choosing and in ( [ eqn_model_concavify ] ), a portion of the allocated power can equivalently be discarded at the node .consequently , proposition [ lem_overflow ] applies to all concavified relaying schemes presented in section [ sub_model_rates ] . as a consequence of proposition [ lem_overflow ], we will restrict the feasible set of policies to those that do not overflow the battery without loss of generality . in epoch ,the nodes choose transmit powers , a time sharing parameter , and a rate pair in the case of half - duplex radios .the objective is to maximize the sum - throughput of the twrc within epochs , where the transmit powers are constrained by harvested energy and the rates are constrained by the rate region .we express the eh - twrc sum - throughput maximization problem [ eqn_model_prob_hd ] for half - duplex nodes , where , , , , and . here , ( [ eqn_model_hd_capacity ] ) is due to proposition [ lem_overflow ] , and ( [ eqn_model_hd_causality ] ) is equivalent to ( [ eqn_model_constraint ] ) given ( [ eqn_model_hd_capacity ] ) .while the rates are a function of the powers of the nodes and the time sharing parameters , this dependency is now deferred to ( [ eqn_model_hd_c1 ] ) , which is the constraint that ensures the rates are selected from the achievable region dictated by the power and time sharing parameters . 
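given the energy causality and battery capacity constraints in the formulation above , checking whether a candidate power policy is feasible amounts to propagating each battery through the epochs : add the new harvest clipped to the battery capacity , then verify that the energy drawn in the epoch does not exceed what is stored . the sketch below does this for arbitrary epoch lengths ; variable names and the treatment of the initial charge are assumptions consistent with the model description , not code from the paper .

....
# feasibility test for a candidate power policy: energy causality and finite
# battery capacity at every node and epoch (initial charges are treated as
# the harvest of the first epoch).
def is_feasible(policy, harvests, lengths, e_max):
    """policy[i][j]   : average transmit power of node i in epoch j
       harvests[i][j] : energy harvested by node i at the start of epoch j
       lengths[j]     : duration of epoch j
       e_max[i]       : battery capacity of node i"""
    for i in range(len(policy)):
        battery = 0.0
        for j in range(len(lengths)):
            battery = min(battery + harvests[i][j], e_max[i])  # overflow is lost
            consumed = policy[i][j] * lengths[j]
            if consumed > battery + 1e-12:                     # causality violated
                return False
            battery -= consumed
    return True
....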
the energy causality constraints given in ( [ eqn_model_hd_causality ] )ensure that the energy consumed by a node is not greater than the energy harvested up to that epoch .the no - overflow constraints given in ( [ eqn_model_hd_capacity ] ) ensure that the battery capacity is not exceeded .any power policy satisfying both ( [ eqn_model_hd_causality ] ) and ( [ eqn_model_hd_capacity ] ) for all and is considered a feasible power power policy .the problem for full - duplex nodes is attained by replacing ( [ eqn_model_hd_c1 ] ) with and omitting the time - sharing variables , .we next show that ( [ eqn_model_prob_hd ] ) can be decomposed by separating the maximization over , , and , , and the maximization over , , , , as [ eqn_model_prob_hd2 ] note that only the constraints in ( [ eqn_model_hd2_c1 ] ) pertain to the parameters of the second maximization .next , we observe that the constraints in ( [ eqn_model_hd2_c1 ] ) are separable in , and the objective is a linear function of and . hence , the second maximization can be carried out separately for each , i.e. , in an epoch - by - epoch fashion , yielding the separated problem [ eqn_model_prob_norate ] where is the solution to [ eqn_model_prob_hd_inner ] within a single epoch with fixed powers .this implies that the optimal transmit rates within each epoch are the sum - rate maximizing rates for the given transmit powers within that epoch .thus , we refer to the function as the _ maximum epoch sum - rate_. for full - duplex nodes , the maximum epoch sum - rate is found by solving [ eqn_model_prob_fd_inner ] instead , and the power policy optimization is identical to ( [ eqn_model_prob_norate ] ) .we next show a property of policies that solve the problem in ( [ eqn_model_prob_hd ] ) .[ lem_depleted_batteries ] there exists an optimal policy which depletes the batteries of all nodes at the end of transmission .let be a transmission policy which leaves energy in the battery of node at the end of transmission .consider the transmission policy which has , and equals the original policy elsewhere .hence , this policy expends the remaining energy in the battery of in the last epoch , depleting the batteries .we have for and , for , due to ( [ eqn_prop_nondiminishing ] ) .therefore , the sum - throughput of the new policy can not be lower than that of the original policy .now that we have formulated the problem and identified some necessary properties of the optimal policy , we next find the optimal power policy for the eh - twrc .we establish this using a generalization of the directional water - filling algorithm in , which gives the optimal policy for a single transmitter fading channel . in this section, we show the optimality of the generalized directional water - filling algorithm and verify its convergence . to find the optimal policy , we first find the maximum epoch sum - rate by solving ( [ eqn_model_prob_hd_inner ] ) and ( [ eqn_model_prob_fd_inner ] ) for half - duplex and full - duplex nodes , respectively . 
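for half - duplex decode - and - forward , the maximum epoch sum - rate can be approximated by a one - dimensional grid search over the multiple - access phase fraction , balancing the multiple - access and broadcast phases for the given average powers . the sketch below uses the same assumed standard - form rate expressions as in the earlier sketch , with the average powers scaled by the phase durations ; it illustrates the structure of the inner problem rather than the paper 's exact equations .

....
# approximate maximum epoch sum-rate for half-duplex decode-and-forward by a
# grid search over the MAC phase fraction d (the broadcast phase lasts 1 - d).
# average powers are scaled by the phase durations, as in the model.
import math

def c(x):
    return math.log2(1.0 + x)

def df_max_epoch_sum_rate_hd(p1, p2, pr, g1, g2, grid=1000):
    best = 0.0
    for k in range(1, grid):
        d = k / grid
        r1 = min(d * c(g1 * p1 / d), (1 - d) * c(g2 * pr / (1 - d)))
        r2 = min(d * c(g2 * p2 / d), (1 - d) * c(g1 * pr / (1 - d)))
        best = max(best, min(r1 + r2, d * c((g1 * p1 + g2 * p2) / d)))
    return best
....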
the following property of can be immediately observed for any relaying scheme .[ lem_sumrate_convex ] the maximum epoch sum - rate is jointly concave in transmit powers , , and .proof follows from the concavity of objectives ( [ eqn_model_hd_inner_objective ] ) and ( [ eqn_model_fd_inner_objective ] ) , and the convexity of constraint sets ( [ eqn_model_hd_inner_c1 ] ) and ( [ eqn_model_fd_inner_c1 ] ) .let and denote two feasible rate pairs , and and their time - sharing parameters for transmit powers and , respectively .let , , , and , denote the convex combination of the policies with parameter . then , for all relaying schemes , or follows either from the definition of the rate region , or from ( [ eqn_model_concavify ] ) . as a consequence of lemma[ lem_sumrate_convex ] , ( [ eqn_model_prob_norate ] ) is a convex program .we next provide the _ iterative generalized directional water - filling algorithm _ to compute the optimal power policy . consider the power allocation problem in ( [ eqn_model_prob_norate ] ) for an arbitrary relaying scheme with the maximum epoch sum - rate . here, the constraints in ( [ eqn_model_prob_norate_causality ] ) and ( [ eqn_model_prob_norate_capacity ] ) are separable among .hence , a block coordinate descent algorithm , i.e. , alternating maximization , can be employed . in each iteration , the power allocation problem for node , , given by [ eqn_kkt ] is solved while keeping the remaining power levels , , constant .this is a convex single user problem , and the solution satisfies the kkt stationarity conditions and complementary slackness conditions [ eqn_rates_gidwf_iter ] for all where , and are the lagrange multipliers for energy causality , battery capacity and transmit power non - negativity constraints , respectively .hence , the optimal transmit power policy for , i.e. , , is the solution to for all which follows from ( [ eqn_iterative_stationarity ] ) .note that due to ( [ eqn_iterative_cs1 ] ) and ( [ eqn_iterative_cs2 ] ) , the lagrange multipliers are nonzero only when the respective constraints are met with equality .we argue that the solution to ( [ eqn_water - level2 ] ) can be interpreted as a generalization of the directional water - filling algorithm similar to the case in . in ,optimal transmit powers are found by treating the available energy in each epoch as water , and letting water levels equalize by flowing in the forward direction only .the associated algorithm is termed directional water - filling . 
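to make the water analogy concrete , the sketch below implements the basic single - user version referenced above for the offline throughput problem , assuming unit epoch lengths and a battery large enough not to bind : the optimal power is then piecewise constant and non - decreasing , and each constant segment uses the smallest feasible average of the remaining harvests . this is only a stand - in for the generalized water levels introduced next , so it illustrates the mechanics of directional ( forward - only ) water - filling rather than reproducing the iterative multi - node algorithm of the paper .

....
# single-user directional water-filling (offline), unit epoch lengths and a
# battery large enough not to bind: power is piecewise constant and
# non-decreasing, each segment using the smallest feasible average rate.
def directional_water_filling(harvests):
    powers = []
    start, n = 0, len(harvests)
    while start < n:
        total, best_avg, best_end = 0.0, float("inf"), start
        for j in range(start, n):
            total += harvests[j]
            avg = total / (j - start + 1)
            if avg <= best_avg + 1e-12:        # prefer longer segments on ties
                best_avg, best_end = avg, j
        powers.extend([best_avg] * (best_end - start + 1))
        start = best_end + 1
    return powers

print(directional_water_filling([2.0, 0.0, 0.0, 4.0, 1.0]))
# -> [0.666..., 0.666..., 0.666..., 2.5, 2.5]
....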
here, we instead define the _ generalized water levels _ for as the following properties of are readily observed for the optimal policy : ( a ) while , the water levels remain constant among epochs unless the battery is empty or full , increasing only when the battery is empty , and decreasing only when the battery is full , and ( b ) if a positive solution to does not exist , then and .these properties imply that the optimal policy can be found by performing directional water - filling using the generalized water levels in ( [ eqn_water - level ] ) , and calculating the corresponding transmit powers .water flow is only in the forward direction and the corresponding energy flow is bounded by for node .hence , the flow between two neighboring epochs stops when water levels in ( [ eqn_water - level ] ) are equalized or when the total energy flow reaches .the initial water levels are found by substituting the initial transmit powers in ( [ eqn_water - level ] ) .this algorithm yields transmit powers that satisfy the two properties above by construction .an example of generalized directional water - filling is depicted in figure [ fig_wf ] . with epochs .note that the battery of the node is full at the end of the 5th epoch , preventing further energy flow into the 6th epoch . ]the _ iterative _ generalized directional water - filling ( igdwf ) algorithm employs generalized directional water - filling sequentially for each user until all power levels , , converge , i.e. , alternating maximization .although optimization is carried on separately for a single user at each iteration , the transmit powers of all users interact through the generalized water levels in ( [ eqn_water - level ] ) .starting from the initial values , the iteration of the algorithm , optimizing for , is given in algorithm [ alg_1 ] .at each iteration of the igdwf algorithm , the water flow out of each of the epochs can be found using a binary search .this requires updating at most water levels following each epoch .hence , the computational complexity of each iteration is , i.e. , quadratic in the number of epochs .\1 ) let , , for , , .\2 ) * for * , * do * find the set , * if * and , * then * assign , find and assign , , such that is minimized . * end for * \3 ) repeat 2 until or for all . for the alternating maximization in section [ sub_solution ] to converge to an optimal policy , it is sufficient that the feasible set is the intersection of convex constraints that are separable among , and the continuously differentiable objective yields a unique maximum in each iteration ( * ? ? ?2.7.1 ) . in this case , the objective in ( [ eqn_model_prob_norate_objective ] ) is concave and continuously differentiable for all relaying strategies , with compute - and - forward satisfying this condition after the concavification in ( [ eqn_model_concavify ] ) .the feasible set ( [ eqn_model_prob_norate_causality])-([eqn_model_prob_norate_capacity ] ) is separable among as well .however , the objective does not necessarily yield a unique maximum at each iteration since it is not strictly concave in transmit powers . to overcome this , we introduce the unconstrained variables for , and modify the objective in ( [ eqn_iterative_objective ] ) as where , are arbitrarily small parameters . 
the objective in ( [ eqn_new_objective ] )is maximized by a unique in each iteration with or .the iterations optimizing trivially yield the unique solution .therefore , the problem now satisfies the convergence property for alternating maximization , and converges to the global maximum of ( [ eqn_model_prob_norate ] ) ( * ? ? ?2.7.2 ) . note that through ( [ eqn_new_objective ] ) and the arbitrarily small , we essentially introduce _ resistance _ to the iterative algorithm .that is , if the original objective in ( [ eqn_model_prob_norate_objective ] ) yields multiple solutions for some , the objective in ( [ eqn_new_objective ] ) has a unique solution that is closest to the previous value of .consequently , if there exists more than one optimal solution to ( [ eqn_kkt ] ) at one of the iterations for some , the power policy that is closest to the previous one is chosen .this is ensured by choosing the flow amount which minimizes in step 2 of algorithm [ alg_1 ] .the power allocation policy we have considered so far is an offline policy , in the sense that the energy harvest amounts and times are known to all nodes in advance .although the offline approach is useful for predictable energy harvesting scenarios and as a benchmark , it is also meaningful to develop policies that only rely on past and current energy states , i.e. , causal information only .we refer to such transmission policies as _ online _ policies .recent efforts that consider online algorithms for energy harvesting nodes in various channel models include .building upon the previous work , in this section , we find the optimal online policy for power allocation in the two - way relay channel .the epoch length indicates that no energy will be harvested for a duration of after the energy arrival .therefore , in the online problem , the epoch lengths are not known by the nodes causally .instead , we divide the transmission period into time slots of length , and recalculate transmit powers at the beginning of each time slot .we assume that each energy harvest takes place at the beginning of some time slot .note that with smaller , this model gets arbitrarily close to the general model in section [ sect_model ] .we assume that harvests in time slot are independent and identically distributed . in time slot , nodes , and have access to previous energy harvests , and .the nodes decide on transmit powers , through _ actions _ where denotes all energy arrivals prior to , and including , time slot .each time slot with transmit powers contribute to the additive objective through the sum - rate function in ( [ eqn_model_prob_hd_inner ] ) and ( [ eqn_model_prob_fd_inner ] ) for full - duplex and half - duplex modes , respectively .we consider the problem of finding the optimal set of actions for this setting , which can be formulated as the following dynamic program : .\end{aligned}\ ] ] here is the number of time slots and ] for , with unit epoch lengths s. the noise density is w / hz at all nodes and the bandwidth is mhz .examples for the optimal transmit power policies found using the algorithm described in section [ sect_identify ] are shown in figures [ fig_subgradient_fd2][fig_giwftunnel_df ] for decode - and - forward relaying . in each figure ,cumulative energy consumed by the nodes for transmission are plotted , the derivative of which yields the average transmit powers of the nodes in each epoch . 
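the finite - horizon dynamic program formulated above can be solved numerically by backward induction over discretized battery states . the sketch below illustrates this for a single energy - harvesting node with i.i.d. harvests and a generic concave per - slot reward standing in for the epoch sum - rate ; the grid , the reward and the harvest distribution are illustrative assumptions , and the actual eh - twrc problem carries a joint state over the batteries of all three nodes .

....
# backward induction for a single node: discretized battery state, i.i.d.
# harvests, reward(p) standing in for the per-slot sum-rate contribution.
import math

def solve_dp(levels, harvest_pmf, reward, horizon):
    """levels      : sorted grid of feasible battery energies
       harvest_pmf : dict {harvest amount: probability}
       reward(p)   : concave per-slot reward for transmit energy p"""
    n, e_max = len(levels), levels[-1]
    value = [[0.0] * n for _ in range(horizon + 1)]
    policy = [[0.0] * n for _ in range(horizon)]
    for t in range(horizon - 1, -1, -1):
        for s, b in enumerate(levels):
            best_v, best_p = -math.inf, 0.0
            for p in levels:                     # candidate transmit energies
                if p > b + 1e-12:                # cannot spend more than stored
                    break
                exp_future = 0.0
                for h, prob in harvest_pmf.items():
                    nb = min(b - p + h, e_max)   # battery evolution with overflow
                    s_next = min(range(n), key=lambda k: abs(levels[k] - nb))
                    exp_future += prob * value[t + 1][s_next]
                v = reward(p) + exp_future
                if v > best_v:
                    best_v, best_p = v, p
            value[t][s], policy[t][s] = best_v, best_p
    return value, policy

levels = [0.5 * i for i in range(11)]            # battery grid, capacity 5
pmf = {0.0: 0.5, 1.0: 0.5}                       # example i.i.d. harvest law
val, pol = solve_dp(levels, pmf, lambda p: math.log(1.0 + p), horizon=10)
....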
in the figures, & stands for the total cumulative energy of the nodes and , and mac fraction represents the fraction of the multiple access phase , i.e. , .we remark that concavified sum - rate functions are used for the simulations , and average transmit powers are shown in the plots for clarity .pairs of staircases , shown in red and green , represent energy causality and battery capacity constraints on the cumulative power , which is referred to as the feasible energy tunnel .a feasible policy remains between these two constraints throughout the transmission period .figures [ fig_subgradient_fd2 ] and [ fig_subgradient_fd1 ] are plotted for full - duplex nodes while figures [ fig_subgradient_hd2 ] and [ fig_giwftunnel_df ] are plotted for half - duplex nodes .both scenarios are considered for an asymmetric eh - twrc with in figures [ fig_subgradient_fd2 ] and [ fig_subgradient_hd2 ] , and for a symmetric eh - twrc with in figures [ fig_subgradient_fd1 ] and [ fig_giwftunnel_df ] .+ + + + we remark that unlike previous work with simpler channel models , e.g. , , the optimal cumulative energy or sum - power policy is not necessarily the shortest path that traverses the feasible tunnel .figure [ fig_subgradient_fd2 ] shows a setting with mj and mj , i.e. , the relay is energy deprived compared to and .hence , energy efficiency is critical for the relay while this is not necessarily the case for the remaining nodes that are relatively energy - rich .this results in the optimal policy being largely dictated by the relay .note that in figure [ fig_subgradient_fd2 ] , the relay follows a cumulative energy that resembles the shortest path through the feasible energy tunnel , while for and this is not the case .in contrast , in figure [ fig_subgradient_fd1 ] , the multiple access phase is more likely to be limiting because the sum - rate with equal transmit powers at all nodes is limited by the sum - rate constraint of the multiple access phase , see ( [ eqn_rates_hd_mabc_sum ] ) .thus , the total cumulative energy , denoted with & in figure [ fig_subgradient_fd1](a ) , follows the shortest path within the tunnel , similar to the optimal policy for the multiple access channel in . however , broadcast powers do not yield binding constraints , implying that contrary to the energy harvesting models previously studied , e.g. , , the optimal policy for the eh - twrc is not necessarily unique . comparing figures [ fig_subgradient_hd2 ] and [ fig_giwftunnel_df ] , which show optimal policies for the half - duplex model , we observe that the time division parameters play an important role in helping energy deprived nodes . 
by properly selecting ,the effect of unbalanced energy harvests at the sources and the relay can be mitigated .however , this still does not imply the shortest path is optimal for each node .this is due to the interplay of transmit powers though the joint rate function in the objective .hence , whenever the transmit power changes for one user due to a full battery or an empty battery , the transmit powers of other users are affected as well .examples to this phenomena can be found in figure [ fig_subgradient_hd2 ] , at s , where the energy depletion in is observed to affect the transmit powers of and , and in figure [ fig_giwftunnel_df ] , at s , where the energy depletion in and is observed to affect the transmit power of .similar results were observed for compress - and - forward , compute - and - forward , and amplify - and - forward relaying through simulations .we observed that identical energy harvesting profiles and channel parameters yield transmit powers that only differ slightly among relaying schemes .however , the multiple access phase fractions , , differ notably among relaying schemes in order to achieve matching multiple access and broadcast rates within each epoch . due to the similarity of the transmit power policies , to avoid repetition , we omit the plots for these schemes .next , we compare the performance of the optimal offline and online policies with upper and lower bounds for a decode - and - forward relay .we obtain a non - energy - harvesting upper bound by providing the total energy harvested by each node at the beginning of the transmission without a battery restriction .we also present two nave transmit power policies , namely the hasty policy and the constant power policy , as lower bounds .the former policy , also referred to as the spend - as - you - get algorithm , consumes all harvested energy immediately within the same epoch .the latter policy chooses the average harvest rate at each node as the desired transmit power , and transmits with this power whenever possible . for both nave policies , the phase fraction parameters that maximize the instantaneous sum - rate for the given transmit powers are chosen within each epoch .we consider a half - duplex eh - twrc with db , and choose the energy harvests for node to be independent and uniformly distributed over $ ] where mj and mj are the peak harvest rates .the infinite horizon online policy is found using a discount factor of .the sum - throughput values resulting from these policies with a half - duplex relay in epochs , averaged over independently generated scenarios , are plotted in figure [ fig_comparison ] . in the figure , the peak harvest rate for node , , is varied in order to evaluate the performance of the policies at different harvesting rate scenarios .we observe that the optimal online policy , found for a horizon of epochs , as well as its infinite horizon counterpart perform notably better than the nave policies . .the compress - and - forward and amplify - and - forward strategies and omitted since they perform notably worse than those in the plot . 
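for completeness , the two naive baselines described above can be written in a few lines each : the hasty ( spend - as - you - get ) policy spends in every epoch whatever energy is currently stored , while the constant - power policy targets the average harvest rate and transmits at that level whenever the battery allows . the sketch below is a per - node rendering of these rules with unit epoch lengths ; the fallback behaviour of the constant policy when the battery is short is an assumption , since the text only says it transmits at the target power whenever possible .

....
# naive per-node baseline policies (unit epoch lengths assumed).
def hasty_policy(harvests, e_max):
    """spend-as-you-get: use all stored energy in the epoch it becomes available."""
    powers, battery = [], 0.0
    for h in harvests:
        battery = min(battery + h, e_max)
        powers.append(battery)          # entire battery spent in this epoch
        battery = 0.0
    return powers

def constant_policy(harvests, e_max):
    """target the average harvest rate; transmit it whenever the battery allows."""
    target = sum(harvests) / len(harvests)
    powers, battery = [], 0.0
    for h in harvests:
        battery = min(battery + h, e_max)
        p = min(target, battery)        # assumed fallback: spend what is available
        powers.append(p)
        battery -= p
    return powers
....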
]finally , we compare the sum - throughput resulting from decode - and - forward , compute - and - forward , compress - and - forward , amplify - and - forward , and hybrid strategies in an eh - twrc .the same parameters as in figure [ fig_comparison ] are used in simulations .the sum - throughput values obtained over a duration of epochs are plotted in figure [ fig_sims_hybrid_rates ] .we observe that for low and high transmit powers , either decode - and - forward or compute - and - forward outperforms the other , respectively , while they both exceed the sum - throughput values of compress - and - forward and amplify - and - forward relaying .however , as expected , the hybrid strategy outperforms all single - strategy approaches , since it performs at least as good as the best one in each epoch .in this paper , we considered the sum - throughput maximization problem in a two - way relay channel where all nodes are energy harvesting with limited battery storage , i.e. , finite battery .we considered decode - and - forward , compress - and - forward , compute - and - forward and amplify - and - forward relaying strategies with full - duplex and half - duplex radios . noticing that the best relaying strategy depends on instantaneous transmit powers , we proposed a hybrid relaying scheme that switches between relaying strategies based on instantaneous transmit powers .we solved the sum - throughput maximization problem for the eh - twrc using an iterative generalized directional water - filling algorithm . for cases where offline information about energy harvests is not available , we formulated dynamic programs which yield optimal online transmit power policies .simulation results confirmed the benefit of the hybrid strategy over individual relaying strategies , and the improvement in sum - throughput with optimal power policies over nave power policies .the online policies found via dynamic programming also proved to perform better than their nave alternatives .it was observed that in a two - way channel with energy harvesting nodes , either of the communication phases , i.e. , broadcast or multiple access phases , can be limiting , impacting the optimal transmit powers in the non - limiting phase as well .thus , the jointly optimal policies were observed not to be the throughput maximizers for each individual node , or the sum - throughput maximizers for a subset of nodes - a fundamental departure in the structure of optimal policies in previous work .we remark that the offline throughput maximization problem for the full - duplex and half - duplex cases when decode - and - forward relaying is used can also be solved using the subgradient descent algorithm as shown in .future directions for this channel model include optimal offline and online power policies for more involved models with data arrivals at the sources , data buffers at the relay , or a direct channel between sources .k. tutuncuoglu , b. varan , and a. yener .optimum transmission policies for energy harvesting two - way relay channels . in _icc workshop on green broadband access : energy efficient wireless and wired network solutions _ , june 2013 .h. liu , f. sun , c. thai , e. de carvalho , and p. popovski .optimizing completion time and energy consumption in a bidirectional relay network . in _ieee international symposium on wireless communication systems , iswcs _ , august 2012 .m. b. khuzani , h. e. saffar , e. h. m. alian , and p. mitran . 
on optimal online power policies for energy harvesting with finite - state markov channels . in _ieee international symposium on information theory , isit _ , july 2013 .m. gorlatova , a. bernstein , and g. zussman .performance evaluation of resource allocation policies for energy harvesting devices . in _ international symposium on modeling and optimization in mobile , ad - hoc and wireless networks , wiopt _ , may 2011 .k. tutuncuoglu , b. varan , and a. yener .energy harvesting two - way half - duplex relay channel with decode - and - forward relaying : optimum power policies . in _ieee international conference on digital signal processing , dsp _ , july 2013 .
|
in this paper , we study the two - way relay channel with energy harvesting nodes . in particular , we find transmission policies that maximize the sum - throughput for two - way relay channels when the relay does not employ a data buffer . the relay can perform decode - and - forward , compress - and - forward , compute - and - forward or amplify - and - forward relaying . furthermore , we consider throughput improvement by dynamically choosing relaying strategies , resulting in hybrid relaying strategies . we show that an iterative generalized directional water - filling algorithm solves the offline throughput maximization problem , with the achievable sum - rate from an individual or hybrid relaying scheme . in addition to the optimum offline policy , we obtain the optimum online policy via dynamic programming . we provide numerical results for each relaying scheme to support the analytic findings , pointing out to the advantage of adapting the instantaneous relaying strategy to the available harvested energy . energy harvesting nodes , two - way relay channel , decode / compute / compress / amplify - and - forward , hybrid relaying strategies , throughput maximization .
|
in wireless communications , fading is a major factor that deteriorates the quality of signal transmission .many methods have been proposed to mitigate the effect of fading .of particular interest are the multi - antenna technologies which can provide high diversity gain and spatial multiplexing gain .recently , a wealth of research has investigated the interplay between forward error correction ( fec ) and spatial diversity . as a type of superior fec code, low - density parity - check ( ldpc ) codes are known to perform near the shannon limit over the additive white gaussian noise ( awgn ) channel . however , ldpc codes that perform well in awgn channels may not do so in a fading environment . to overcome this weakness ,ldpc codes have been studied in fading environments and capacity - approaching ldpc codes have also been designed for single - input multiple - output ( simo ) channels and multiple - input multiple - output ( mimo ) channels using density evolution and extrinsic information transfer ( exit ) function , respectively .many of the capacity - approaching ldpc codes , however , are irregular and hence suffer from two main drawbacks high error floor and nonlinear encoding .recently , some research has successfully reduced the error floor of short - block - length ldpc codes which can then attain outstanding performance down to an error rate of .meanwhile , the quasi - cyclic ldpc codes that permit linear encoding have been proposed for mimo channels .in addition , a novel class of ldpc code , namely multi - edge type ( met ) ldpc code , has been introduced .as one subclass of the met - ldpc code , the protograph - based ldpc code has emerged as a promising fec scheme due to its excellent error performance and low complexity .two types of protograph codes , namely accumulate - repeat - accumulate ( ara ) code and accumulate - repeat - by-4-jagged - accumulate ( ar4ja ) code , which can realize linear encoding and decoding have been proposed by jet propulsion laboratory ( jpl ) . in ,a protograph exit ( pexit ) algorithm has been introduced to predict the threshold of protograph codes over the awgn channel . while the protograph codes have further been studied under rayleigh fading channels , to the best of our knowledge , littleis known about the analytical performance for protograph codes under fading conditions and antenna diversity has never been considered . in this paper , we aim to investigate the performance of the protograph codes over a simo rayleigh fading channel . 
to do so, we propose a modified pexit algorithm for analyzing the protograph ldpc code over a fading environment .we then analyze the decoding thresholds of the accumulate - repeat - by-3-accumulate ( ar3a ) code , the ar4ja code , the regular ldpc code and two optimized irregular ldpc codes and use the thresholds to predict the error performance of the codes .the results show that the irregular ldpc codes outperform the other codes in the low signal - to - noise - ratio ( snr ) region .however , in the high - snr region , the ar3a code possesses the best error performance .besides , we also study the performance of the ar3a code and the ar4ja code with different diversity orders based on ( i ) the mean and variance of the log - likelihood - ratio ( llr ) values , ( ii ) the pexit analysis , and ( iii ) the bit - error - rate ( ber ) simulations .we find that the additional gain becomes smaller as we increment the diversity order and hence increase the system complexity .we organize the remainder of this paper as follows . in section[ sect : review ] , we present the system model over the simo fading channels and in section [ sect : pexit ] , we describe our modified pexit algorithm for analyzing protograph codes in the fading environments . in section[ sect : analysis ] , we analyze the decoding threshold and the initial llr distribution of two conventional protograph codes .we show the simulation results in section [ sect : sim_dis ] and finally we give the concluding remarks in section [ sect : conclusion ] .the system model being considered in this paper is illustrated in fig .[ fig : fig.1 ] . referring to this figure ,the information bits ( info bits ) are firstly encoded by the ( punctured ) protograph ldpc code .then the binary coded bits are passed to a binary - phase - shift - keying ( bpsk ) modulator , the output of which is given by .the modulated signal is further sent through a simo fading channel with one transmit antenna and receive antennas .we denote as a channel realization vector of size , the entries of which are complex independent gaussian random variables with zero - mean and variance , i.e. , , per dimension .moreover , is assumed to be independent over time .we define as a complex awgn vector with zero - mean and covariance matrix where is the expectation operator , denotes the transpose conjugate operator , and represents the identity matrix .then , the receive signal vector , denoted by , is given by note that up to now , the time index has been omitted for clarity .at the receiver , we assume that the received signals in are combined using the maximum - ratio - combining ( mrc ) method or the equal - gain - combining ( egc ) method . afterwards , the combined signals are sent to the ldpc decoder for finding the valid codewords .we do not consider interleaving in our system .moreover , we assume that ( i ) the channels formed by different transmit - receive antenna pairs are independent and ( ii ) each receive antenna possesses perfect channel state information ( csi ) which is changing sufficiently rapidly to satisfy ergodicity .the capacity of an ergodic simo channel is further given by where is the average energy per transmitted symbol and is the determinant operator .this capacity can be evaluated by using monte carlo simulation , i.e. , by generating a large number of independent channel realizations and computing their average capacity value . 
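a minimal monte carlo estimate of the ergodic simo capacity in the sense used above can be sketched as follows; the snr value, the number of realizations, and the unit-variance normalization of the complex fading coefficients are illustrative assumptions of this sketch.

```python
# monte carlo estimate of the ergodic simo capacity, which for a single
# transmit antenna reduces to E[ log2( 1 + SNR * ||h||^2 ) ].  the per-dimension
# fading variance is taken as 1/2 so that E[|h_k|^2] = 1.
import numpy as np

def simo_ergodic_capacity(snr_db, n_rx, n_trials=200_000, seed=1):
    rng = np.random.default_rng(seed)
    snr = 10.0 ** (snr_db / 10.0)
    h = (rng.standard_normal((n_trials, n_rx)) +
         1j * rng.standard_normal((n_trials, n_rx))) / np.sqrt(2.0)
    gain = np.sum(np.abs(h) ** 2, axis=1)            # ||h||^2 per channel realization
    return np.mean(np.log2(1.0 + snr * gain))        # bits per channel use

for n_rx in (1, 2, 4):
    print(n_rx, "receive antennas:", simo_ergodic_capacity(snr_db=0.0, n_rx=n_rx))
```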
in practical applications, we should choose a modulation technique and a code rate such that the product of the modulation order and the code rate equals the channel capacity , i.e. , .for bpsk modulation , and hence we have .moreover , the definition of ( is the average energy per information bit ) used in this paper is the same as that used in , i.e., conventional exit function has been proposed to better trace the convergence behavior of the iterative decoding schemes and to efficiently estimate the thresholds of different codes .however , it is known not to be applicable to the protograph codes . in ,a protograph exit ( pexit ) algorithm has been introduced to facilitate the analysis and design of protograph codes over the awgn channel . in the following ,we illustrate that the pexit algorithm , which works well on the awgn channel , is no longer applicable to a simo rayleigh fading channel. then we modify the pexit algorithm for such a channel and use it for analyzing the protograph codes in our system .one important assumption of the proposed pexit algorithm in is that the channel log - likelihood - ratio ( llr ) messages should follow a symmetric gaussian distribution . in the following ,we briefly illustrate that this assumption can not be maintained in the case of a simo rayleigh fading channel and then we elaborate how to apply the pexit algorithm in such an environment . to simplify the analysis, we assume that the all - zero codeword is transmitted . by using to indicate the coded bit number and to indicate the receive antenna number , the signal of the coded bit at the receive antennacan be written as = h_j [ k ] x_j + n_j [ k ] . \ ] ] the combiner output corresponding to the coded bit , denoted by , is then given by r_j [ k ] & ~~\mbox{for mrc } \vspace{0.2 mm } \\\displaystyle \sum_{k=1}^{n_{\rm r } } \frac{h_j^\ast [ k ] } { |h_j [ k]| } r_j [ k ] & ~~\mbox{for egc } \end{array}\right.\ ] ] where denotes the complex conjugate , represents the modulus operator and /| h_j [ k]| ] ( the superscript `` '' represents the transpose operator ) . to evaluate the performance of the two combiners , we examine the distribution of by exploiting monte carlo simulations .we use a rate- ar3a code with an information length per code block of .moreover , we consider a simo rayleigh fading channel with and db . by sending repeatedlywhile varying the channel fading vector from bit to bit , we evaluate the mean of the absolute value of .we observe that the mrc produces an average value of , i.e. , whereas the egc gives .moreover , both combiners produce channel llr values with almost the same variance is given by = { \mathbb e}[(z- { \mathbb e}(z))(z- { \mathbb e}(z))^*] ] . in fig .[ fig : fig.2 ] , we further plot the probability density function ( pdf ) of the values ( denoted by ) when mrc is used .we also define and and plot the pdf of the gaussian distribution in the same figure for comparison .the curves in the figure indicate that the pdf of the values does not agree with the pdf of , which further suggests that the values do not follow a symmetric complex gaussian distribution .we conclude that the pexit algorithm in is not applicable to this type of channel . in the following ,we analyze the distribution of the values when the channel realization is fixed .values and over the simo rayleigh fading channel ., width=288,height=240 ] we consider a fixed channel realization , i.e. , a fixed channel fading vector .we assume using the all - zero codeword ( i.e. 
, ) and we substitute into .then , we can rewrite the expression for as \left ( h_j [ k ] x_j + n_j [ k ] \right ) \notag\\ & = \frac{2 } { \sigma_n^2 } \sum_{k=1}^{n_{\rm r } } \left ( \left| h_j [ k]\right|^2 + h_j^\ast [ k ] n_j [ k ] \right).\label{eq : l_oj_extention}\end{aligned}\ ] ] as n_j^\ast [ k]\right ) = \sigma_n^2 ], we can calculate the corresponding _ channel factor _ using , i.e. , \right|^2 ] of dimension to represent the blocks of channel factors , i.e. , each row in represents a group of channel factors for the variable nodes in the protograph .we also select an initial ( in db ) which should be sufficiently small .[ it : i_ch ] for and , we set the initial to .we also reset the iteration number to .considering the punctured label and substituting into , for the channel factor ( and ) , the corresponding variance of the initial llr value ( denoted by ) is given by 3 .[ it : i_ev ] if , set db and go to step [ it : i_ch ] ; otherwise , for ; and , we calculate output extrinsic mi sent by to for the fading block using ^ 2 \right ) + ( b_{i , j } - 1 ) [ j^{-1 } ( i_{av}(i , j))]^2 + \sigma_{ch , q , j}^2 } \ ; \right ) \label{eq : i_ev}\ ] ] 4 . for and , we obtain the expected value of using = \frac{1}{q } \sum_{q=1}^{q } i_{ev , q}(i , j ) .\label{eq : averagei_ev}\ ] ] then , the _ a priori _ mi between the input llr of on each of the edges and the corresponding coded bit is evaluated using .\ ] ] 5 . for and , we compute the output extrinsic mi sent by to using ^ 2 \right ) + ( b_{i , j } - 1 ) [ j^{-1 } ( 1 - i_{ac}(i , j))]^2}\right ) \right ) \label{eq : i_ec}\ ] ] then , we get the _ a priori _ mi between the input llr of on each of the edges and the corresponding coded bit using 6 . for and , we compute the _ a posteriori _ mi of using ^ 2 \right ) + \sigma_{ch , q , j}^2}\;\right ) .\label{eq : i_app}\ ] ] then , for every , we can evaluate the expected value of using = \frac{1}{q } \sum_{q=1}^{q } i_{app , q}(j ) .\label{eq : averagei_app}\ ] ] 7 .if the expected mi values = 1 ] can be obtained based on the different values of .the average quantity ] can be obtained based on the different values of . to ensure the accuracy of the modified pexit algorithm, we should generate a sufficiently large number of blocks of channel factors , i.e. , a large value for . in this paper , we use .protograph codes not only enable linear encoding and decoding to be implemented easily , but also have superior error performance over the awgn channel .as two typical ldpc codes constructed by protographs , the ar3a code and the ar4ja code possess excellent performance in the waterfall region and the error floor region , respectively , over the awgn channel .the corresponding base matrices of the ar3a code and the ar4ja code with a code rate are denoted by and , respectively , where we assume that the column of the matrix corresponds to the variable node and the row of the matrix corresponds to the check node . note that the variable nodes corresponding to the second columns in and are punctured . using the modified pexit algorithm proposed in sect .[ sect : modified pexit ] , we firstly investigate the decoding thresholds of the ar3a and ar4ja codes with a code rate of ( i.e. , when . 
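the threshold computations above repeatedly evaluate the mutual-information function j(.) and its inverse for consistent gaussian llr messages. the helper functions below use a widely adopted closed-form curve-fit approximation from the exit-chart literature rather than the exact expressions of the paper, so the numerical coefficients should be read as an assumption of this illustration.

```python
# J-function for consistent gaussian LLRs (mutual information between a BPSK
# bit and an LLR ~ N(sigma^2/2, sigma^2)) and its algebraic inverse, using a
# common curve-fit approximation (coefficients are an assumption of this sketch).
import numpy as np

_H1, _H2, _H3 = 0.3073, 0.8935, 1.1064

def J(sigma):
    sigma = np.asarray(sigma, dtype=float)
    return (1.0 - 2.0 ** (-_H1 * sigma ** (2.0 * _H2))) ** _H3

def J_inv(I):
    # clip away from the endpoints for numerical safety
    I = np.clip(np.asarray(I, dtype=float), 1e-12, 1.0 - 1e-12)
    return (-(1.0 / _H1) * np.log2(1.0 - I ** (1.0 / _H3))) ** (1.0 / (2.0 * _H2))

# quick consistency check of the forward/inverse pair
s = np.linspace(0.1, 8.0, 5)
print(np.allclose(J_inv(J(s)), s, rtol=1e-3))
```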
for comparison , the regular ldpc code and the two optimized irregular ldpc codes ( denoted as irregular ldpc code a and irregular ldpc code b ) in are used to gauge the performance .moreover , the degree distribution pairs of the irregular codes are given as and as shown in , irregular code a and irregular code b are optimized for the diversity orders of and , respectively .for our system , the diversity order equals .table [ tab : thre - compare ] shows the decoding thresholds and the _ capacity gaps _ of these four codes over simo rayleigh fading channels with two different diversity orders .results in this table indicate that the decoding thresholds of irregular code a and irregular code b are smallest over such channels with the diversity order and , respectively .these small thresholds suggest that the irregular codes should possess relatively better performance in the low - snr region .however , the irregular codes may be outperformed by other codes in the high - snr region because of the error - floor issue .also , the results demonstrate that the regular ldpc code possesses the highest threshold .hence , the regular code is expected to be inferior in the low - snr region . between the ar3a code and the ar4ja code ,it is further observed that the ar3a code has lower thresholds for both the cases of and .[ cols="^,^,^,^,^,^,^,^,^,^ " , ] [ tab : mean - var ]in this section , we simulate the error performance of the ar3a code , the ar4ja code , the regular code , and the optimized irregular ldpc codes in over simo rayleigh fading channels .we also discuss the influence of the diversity order on the error performance of the protograph codes .the code rate and the information length of each code block are and , respectively .we terminate the simulations after bit errors are found at each . also , the ldpc decoder performs a maximum of bp iterations for each code block .[ fig : fig.3 ] figure [ fig : fig.3 ] plots the bit error rate ( ber ) and frame error rate ( fer ) curves of the codes over the simo rayleigh fading channels with diversity orders and .it can be seen that the ar4ja code and the regular code are the two worst - performing codes for the two diversity orders . moreover , referring to fig .[ fig : fig.3](a ) , at a ber of , the irregular code a has a gain of about db over the ar3a code , which remarkably outperforms the other two codes . yet, the ber and the fer performance of the irregular ldpc codes has little improvement when the exceeds db upon which the error floor emerges . in the same figure, we observe that the ar3a code has excellent error performance for the range of under study .for instance , at db , the ar3a code accomplishes a ber of , while the irregular code , ar4ja code , and the regular code achieve bers of , , and , respectively .we also observe that at a ber of , the ar3a code has a performance gain of db over the irregular ldpc code and the ar4ja code , which are superior to the regular code .moreover , a large gain can be expected at a lower ber .similar conclusions can be drawn from fig .[ fig : fig.3](b ) , where the diversity order equals . 
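the monte carlo stopping rule used in the simulations above ( keep generating blocks at a given value of eb/n0 until a fixed number of bit errors has been collected ) can be sketched as follows. for brevity the ldpc encoder and bp decoder are replaced here by uncoded bpsk detection after mrc over the simo rayleigh channel, so the resulting numbers only illustrate the measurement harness itself, not the coded performance compared in the figures.

```python
# monte carlo BER-measurement harness with an error-count stopping rule.
# the LDPC encode/decode chain is replaced by uncoded BPSK + MRC as a stand-in.
import numpy as np

def ber_point(ebn0_db, n_rx, block_len=1200, target_errors=100, seed=0):
    rng = np.random.default_rng(seed)
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    sigma_n = np.sqrt(1.0 / (2.0 * ebn0))            # per-dimension noise std, unit-energy symbols
    errors = bits_sent = 0
    while errors < target_errors:
        bits = rng.integers(0, 2, block_len)
        x = 1.0 - 2.0 * bits                          # BPSK mapping 0 -> +1, 1 -> -1
        h = (rng.standard_normal((block_len, n_rx)) +
             1j * rng.standard_normal((block_len, n_rx))) / np.sqrt(2.0)
        n = sigma_n * (rng.standard_normal((block_len, n_rx)) +
                       1j * rng.standard_normal((block_len, n_rx)))
        r = h * x[:, None] + n
        y = np.real(np.sum(np.conj(h) * r, axis=1))   # MRC combining
        errors += np.count_nonzero((y < 0) != (bits == 1))
        bits_sent += block_len
    return errors / bits_sent

print(ber_point(ebn0_db=5.0, n_rx=2))
```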
in general , among the four types of codes ,we conclude that the optimized irregular codes possess the best error performance in the low - snr region while the ar3a code can provide excellent error performance in the high - snr region over the simo rayleigh fading channels .figure [ fig : fig.4 ] shows the ber results of the ar3a code and the ar4ja code for various diversity orders ( , , , and ) .as can be seen from this figure , the ar3a code outperforms the ar4ja code by more than db at a ber of for all diversity orders .we also observe that for a fixed ber , the required decreases as the diversity order increases .however , the rate of decrease is reduced with the diversity order .for example , we consider the ar4ja code at a ber of . the required decreases from db to db , db and db , respectively , as from to and .similar observations are found for the ar3a code .furthermore , these observations agree well with the analytical results found in section [ sect : analysis ] . on the other hand ,the system complexity increases with the diversity order .accordingly , one should appropriately select the number of the receive antennas for a practical system so as to make a good balance between system performance and implementation complexity ., , , and .,width=268,height=211 ]in this paper , we have studied the performance of protograph - based ldpc codes for simo systems under fading channel conditions .we have proposed a modified pexit algorithm for analyzing such systems equipped an mrc combiner .we have also used the proposed algorithm to evaluate the decoding threshold of the protograph codes and hence to analyze their error performance .furthermore , we have compared the decoding thresholds and the distribution of the initial llr values of the protograph codes among different diversity orders .we conclude that while the error performance of the protograph codes improves with the diversity order , the rate of improvement is reduced as the diversity order becomes larger .thus , we have to strike a balance between code performance and system complexity when determining the diversity order to be implemented .our simulation results have also shown that the ar3a code is able to provide a significant gain over the ar4ja code , the regular ( 3 , 6 ) code , and the optimized irregular ldpc codes in the high - snr region . in the future , we will strive to optimize the protograph codes in such systems .furthermore , we will explore extending the modified pexit algorithm to other systems such as the mimo fading systems .chung , j. forney , g.d ., t. richardson , and r. urbanke , `` on the design of low - density parity - check codes within 0.0045 db of the shannon limit , '' _ ieee commun_ , vol . 5 , no . 2 ,5860 , feb . 2001 .b. s. tan , k. h. li , and k. c. teh , `` performance analysis of ldpc codes with selection diversity combining over identical and non - identical rayleigh fading channels , '' _ ieee commun ._ , vol . 14 , no . 4 , pp. 333335 , apr .2010 .chen , r. peng , a. ashikhmin , and b. farhang - boroujeny , `` approaching mimo capacity using bitwise markov chain monte carlo detection , '' _ ieee trans ._ , vol .58 , no . 2 ,423428 , feb .a. serener , b. natarajan , and d. gruenbacher , `` lowering the error floor of optimized short - block - length ldpc - coded ofdm via spreading , '' _ ieee trans ._ , vol .57 , no . 3 , pp .16461656 , may 2008 .a. shah and a. 
haimovich , `` performance analysis of maximal ratio combining and comparison with optimum combining for mobile radio communications with cochannel interference , '' _ ieee trans ._ , vol .49 , no . 4 , pp .14541463 , jul .2000 .r. novak and w. a. krzymien , , `` diversity combining options for spread spectrum ofdm systems in frequency selective channels , '' in _ proc .ieee wireless commun . and netw .2005 , vol .1 , pp . 308314 .
|
in wireless communications , spatial diversity techniques , such as space - time block code ( stbc ) and single - input multiple - output ( simo ) , are employed to strengthen the robustness of the transmitted signal against channel fading . this paper studies the performance of protograph - based low - density parity - check ( ldpc ) codes with receive antenna diversity . we first propose a modified version of the protograph extrinsic information transfer ( pexit ) algorithm and use it for deriving the threshold of the protograph codes in a single - input multiple - output ( simo ) system . we then calculate the decoding threshold and simulate the bit error rate ( ber ) of two protograph codes ( accumulate - repeat - by-3-accumulate ( ar3a ) code and accumulate - repeat - by-4-jagged - accumulate ( ar4ja ) code ) , a regular ldpc code and two optimized irregular ldpc codes . the results reveal that the irregular codes achieve the best error performance in the low signal - to - noise - ratio ( snr ) region and the ar3a code outperforms all other codes in the high - snr region . utilizing the theoretical analyses and the simulated results , we further discuss the effect of the diversity order on the performance of the protograph codes . accordingly , the ar3a code stands out as a good candidate for wireless communication systems with multiple receive antennas . channel state information ( csi ) , extrinsic information transfer ( exit ) algorithm , protograph - based ldpc code , receive diversity , single - input multiple - output ( simo ) system .
|
experimental time series are always blurred with additive noise coming from a variety of sources .additive noise further complicates time series analysis , leading to ambiguous interpretations of the basic quantities which characterise the dynamics , like correlation dimension and lyapunov exponents .in particular it may lead unreachable the goal of differentiating deterministic from stochastic dynamics . herean attempt is made to discriminate additive noise ( an ) from the dynamical noise ( dn ) the system may have . note that the term _ additive _ used here denotes a component of noise that is superimposed to the underlying `` clean '' signal , due to the measurement process .common synonyms found in the literature are _ measurement _ or _ observational _ noise .it should not be confused with the term used in the context of stochastic processes ( see , for istance , ) where it often indicates a coordinate - independent stochastic forcing as opposite to _ multiplicative _ noise , for which the amplitude of the random terms depends on the coordinates themselves .here we do not focus on this last distinction , and we will denote generically as _ dynamical _ every noisy mechanism intrinsic to the system . our method is based upon a fundamental , topological , property of deterministic systems , namely , the differentiability of the measure along the reconstructed trajectory or , more specifically , the continuity of its logarithmic derivative . starting from this basis we show that noise ( additive or dynamical ) decrease the differentiability of the measure .this would in principle hinder the possibility of differentiating the two types of noise. then we look at the same property ( continuity ) of the coordinate .now , while an destroys continuity of the coordinate , the latter is scarcely affected by dn .thus , a low continuity of both the coordinate and ( the log derivative of ) the reconstructed measure would indicate the presence of an , while if continuity is high for the coordinate and low for the measure the system contains dn only ( see following section ) .the method , however , does nt differentiate sharply a system with the two types of noise from one having solely an .notwithstanding , we believe that the present approach is a significant step forward in the understanding of determinism and stochasticity . to illustrate how the method works we consider the simple case of the van der pol oscillator .no reasons are forseen that may indicate that the applicability of the method is limited to simple , non chaotic , dynamical systems , such as the one investigated here .the rest of the paper is organised as follows . in section [ s_gc ]we make some general considerations on the effects of additive and dynamical noise . in section [ s_mnp ]we briefly discuss the numerical methods followed to solve the dynamical equations with noise and the embedding process .the procedure followed to evaluate the ( statistical ) continuity is also summarised in that section .the results are discussed in section [ s_r ] , while section [ s_conc ] is devoted to the conclusions of our work .the method of the reconstructed measure along the trajectory has the capability of revealing the presence of dn in a given time series by looking at the degree of ( statistical ) continuity of the logarithmic derivative of such a measure . to this end , it is to be considered as complementary to other techniques , essentially based on short - time predictability or on smoothness in phase - space . 
as far as an is concerned , in ref . it is argued that methods of the second type can be useful even in real experimental series affected by measurement components .a different line is that followed by barahona and poon , who are able to cope with rather large amounts of an , at the `` price '' of building the analysis upon a given class of volterra - type nonlinear models adjusted on the time series itself .here , we do not specify any _ a priori _ dynamics and pursue the extension of the method of ortega and louis to deal with an .the choice is motivated by very recent results which allow us to believe that this method is suitable even for high - dimensional chaotic systems , which somehow fool the above - mentioned alternative approaches . as in refs . we concentrate on continuous - time systems and start by observing that a dn - term modeled by ( the increments of ) a wiener process ( coupled through some constants ) : is basically different from white an , superimposed to the time series \} ] ) . in order to test the mathematical properties embodied in a possible mapping between two given timeseries , that is , continuity , differentiability , inverse differentiability and injectivity , pecora _ et al ._ have developed a set of statistics aimed to test quantitatively these features .their algorithms are of general use and can in particular be applied to test topological properties in any pair of sets of points .basically , the method is intended to evaluate , in terms of probability or confidence levels , whether two data sets are related by a mapping having the continuity property : a function is said to be continuous at a point if such that .the results are tested against the null hypothesis , specifically , the case in which no functional relation exists .this is done by means of the statistics proposed by pecora _ and where is the probability that all of points in the -set , around a certain point , fall in the -set around .the likelihood that this will happen must be relative to the probability , , of the most likely event under the null hypothesis . in the appendix we present some details on the calculation of the ratio in eq .( [ e8 ] ) , useful for a numerical implementation .the sum in eq .( [ e7 ] ) represent an average over points chosen at random in the whole time series .now , when we can confidently reject the null hypothesis , and assume that there exists a continuous function .as in the work of pecora _ et al . _ the scale is relative to the standard deviation of the time series , and thus ] and was always fixed to be a 20% of the total length of the series .the csc for the amplitude of the van der pol oscillator ( eqs .( [ vdpsystem ] ) ) is shown in fig .[ cscvs ] for different levels and kinds of noise . on the one handit is readily seen that the csc for the time series affected by dn is essentially independent of the noise amplitude .this appears to be consistent with the continuity analysis sketched in section [ s_gc ] . 
on the other hand , the csc for series affected by an decreases steadily , and almost linearly , with the noise level .we have considered also the more subtle case in which both an and dn are present .in particular , we have fixed a moderate level of dn , like , and then contaminated the resulting -time series with various levels of an .the corresponding points in fig .[ cscvs ] are almost indistinguishable from those without dn , indicating that what rules the csc is the presence of an .to give an idea of how the cs is disributed over the different resolution scales we have plotted , in fig .[ cscve ] , the statistics of the coordinate for some representative cases .roughly we could say that the effect of the various noise sources is to shift , along the axe , the same sigmoidal shape .what is different , between an and dn , is that the shift induced by the former is considerably more pronounced , actually one order of magnitude in when passing from to .let us now come to the analysis of the csldm values . in refs . it was argued that this quantity is an indicator which is sensible to the presence of dn .the results of fig .[ csldmvs ] show that it is also strongly affected by an .both the dn- and the an - data decrease almost monotonically with the noise level , with the exception of a small jump at .the origin of the latter is not completely clear , though it may be partially due to the statistical fluctuations induced by the sampling over the points ( eq .( [ e7 ] ) ) . as in fig .[ cscvs ] we present some cases in which the combined action of an and dn takes place , just to show a limitation of the present approach . the pure an and the an+dn data are not so close as in fig .[ cscvs ] but there is not a definite trend to discriminate between a non - stochastic ( ) and a stochastic underlying dynamics , when the original time series is contaminated by an . as a general property we should note that the an - free values are always greater than the ones in which an is present .for the sake of completeness a -vs- plot for the measure is given in fig .[ csldmve ] . with respect to fig .[ cscve ] the sigmoidal shape is flattened ( rather than shifted ) , and the an curves are somehow less regular than the dn ones . nonetheless , the lowering effect of noise is clearly appreciable .note also that in this case the effect of varying the an level of one order of magnitude is far less evident than in the csc .in fact this is consistent with fig .[ csldmvs ] where it is seen that the csldm jumps down and levels off to as soon as a small amount of an is introduced .in this paper we have presented the extension of the method of refs . 
to the analysis of time series affected by an .the basic task which one has to perform is to compare the behaviour of the statistical continuity of the coordinate and of the statistical differentiabilty of the natural measure , at different levels of noise .while the first is sensibly different from the `` clean '' one only when an is present , the second is affected by both dynamical and additive noise .hence , it is possible to discriminate between the two cases .altough we believe that the present method can be readily applied also to high - dimensional and/or chaotic systems , the results discussed here refer to the simple van der pol oscillator .a further key step in the analysis of experimental time series , namely the criterion to adopt when both types of noise are present , has been only touched here and is currently investigated by means of real physiological data .this work was supported by grants of the spanish cicyt ( grant no .pb960085 ) , the european tmr network - fractals c.n .fmrxct980183 , the universidad nacional de quilmes ( argentina ) and the universidad de alicante ( spain ) .g. ortega is a member of conicet argentina .as pointed out in the appropriate probability distribution for the continuity test is the binomial one : with , being the number of points in -set ( see text ) .in addition , due to the null hypotesis under consideration , the probility appearing in eq .( [ e8 ] ) is just . except for the trivial case , for which one has , the maximum of the distribution ( [ bin_dist ] )is located at ] denoting the integer part - see , for istance , ) .now , let us observe that in the present case we must calculate only the ratio : where .> from a numerical point of view it is advantageous to exploit the following trick in the left - hand side of eq .( [ expr_rppmax ] ) .take , so that can be simplified in the numerator and in the denominator of the fraction leaving : dealing with the product in eq .( [ trick_icb ] ) has the advantage of avoiding ratios of very large numbers .however , the powers of appearing in eq .( [ expr_rppmax ] ) can still introduce rather small numbers , which have to be treated through their logarithms .weigend and n.a .gershenfeld ( editors ) , _ time series prediction _ , santa fe institute studies in the sciences of complexity series , vol .xv ( addison wesley , reading , 1994 ) .h. kantz and t. schreiber , _ nonlinear time series analysis _( cambridge university press , cambridge , 1997 ) .h. kantz , pp .475 - 490 in .t. d. sauer and j. a. yorke , phys .83 * , 1331 ( 1999 ) .g. sugihara and r. may , nature ( london ) * 344 * , 734 ( 1990 ) .kaplan and l. glass , phys .. lett . * 68 * , 427 ( 1992 ) ; physica d * 64 * , 431 ( 1993 ) .r. wayland , d. bromley , d. pickett and a. passamante , phys .lett . * 70 * , 580 ( 1993 ) .salvino and r. cawley , phys .lett . * 73 * , 1091 ( 1994 ) .j. bhattacharya and p. p. kanjil , physica d * 132 * , 100 ( 1999 ) .g. ortega and e. louis , phys .lett . * 81 * , 4345 ( 1998 ) .g. ortega and e. louis , phys .e. * 62 * , 3419 ( 2000 ) .m. san miguel and r. toral , in _ instabilities and nonequilibrium structures _ ,vi , e. tirapegui and w. zeller eds .( kluwer academic pub . ,l. pecora , t. carroll and j. heagy , phys .e. * 52 * , 3420 ( 1995 ) . t. sauer y j. yorke , ergodic th .17 * , 941 ( 1997 ) .t. miyano , int .. chaos , * 6 * , 2031 ( 1996 ) m. barahona and c .- s .poon , nature * 381 * , 215 ( 1996 ) .g. ortega , c. degli esposti boschi and e. 
louis , _ `` detecting determinism in high - dimensional chaotic systems '' _ , submitted to phys .e. note that in eq .( [ ixj ] ) we have assumed that the are independent of . however , the generalisation to the -dependent case does not modify our picture since the essential point for the continuity property is the existence of a term .this turn out to be the case when the increments are calculated through the milshtein scheme .g. g. szpiro , physica d * 65 * , 289 ( 1993 ) .a. m. mood , f. a. graybill and d. c. boes , _ introduction to the theory of statistics _( mcgraw hill , inc . , 1974 ) .h. abarbanel , r. brown , j. sidorowich and l. tsimring , rev .phys . * 65 * , 1331 ( 1993 ) .m. cencini , m. falcioni , e. olbrich , h. kantz and a. vulpiani , phys .e * 62 * , 427 ( 2000 ) .j. jeong , m.s .kim and s.y .kim , phys .e * 60 * , 831 ( 1999 ) .m. ding _ et al ._ physica d * 69 * , 404 ( 1993 ) .a. galka , t. maab and g. pfister , physica d , * 121 * , 237 ( 1998 ) .m. casdagli , s. eubank , d. farmer and j. gibson , physica d * 51 * , 52 ( 1991 ) .r. hegger , m. j. bnner and h. kantz , phys .* 81(3 ) * , 558 ( 1998 ) .e. olbrich and h. kantz , phys .a * 232(1 - 2 ) * , 63 ( 1997 ) .j. theiler , s. eubank , a. longtin , b. galdrikian and j. d. farmer , physica d * 58 * , 77 ( 1992 ) .a. osborne and a. provenzale , physica d * 35 * , 357 ( 1989 ) .t. schreiber , phys .lett * 80 * , 2105 ( 1998 ) ., for the coordinate of the van der pol oscillator .results are shown for pure an ( no dn , open circles ) , pure dn ( no an , full diamonds ) and for a system having a fixed level plus an with variable standard deviation ( full squares , almost overlapped with the open circles).,width=302,height=453 ] , corresponding to a four - dimensional embedding ( the other paramters being reported in the main text ) .results are shown for pure an ( no dn , open circles ) , pure dn ( no an , full diamonds ) and for a system having a fixed level plus an an with variable standard deviation ( full squares).,width=302,height=453 ] scales , for the same levels and types of noise of fig .[ cscve ] . in all cases ,the measure is estimated from a four - dimensional embedding with the parameters indicated in the text.,width=302,height=453 ]
|
we address the distinction between dynamical and additive noise in time series analysis by making a joint evaluation of both the statistical continuity of the series and the statistical differentiability of the reconstructed measure . low levels of the latter and high levels of the former indicate the presence of dynamical noise only , while low values of the two are observed as soon as additive noise contaminates the signal . the method is presented through the example of the van der pol oscillator , but is expected to be of general validity for continuous - time systems . , and time series analysis , measurement noise , intrinsic noise , statistical continuity , van der pol oscillator . 02.30.cj , 05.45.+b , 07.05.kf
|
on a particle accelerator the longitudinal profiles of a particle bunch can not easily be measured .several indirect measurement techniques have been established relying on the measurement of the spectrum of radiation emitted by the bunch either when it crosses a different material or when it passes near a different material .this emitted spectrum encode the longitudinal profile through the relation : where is the emitted intensity as a function of the wavelength . is the intensity of the signal emitted by a single particle and is a form factor that encodes the longitudinal and transverse shape of the particle bunch .recovering the longitudinal profile requires to invert this equation however this is not straightforward as the information about the phase of the form factor can not be measured and therefore is not available .a phase reconstruction algorithm must therefore be used to recover this phase .several methods exist ( see for example ) . in this articlewe describe how we implemented two of these methods and compared their performances .when it is only possible to measure the amplitude of the complex signal , it is necessary to recover the phase of the available data .we assume that the function of the longitudinal beam density is analytical .for an analytic function this is easier because the real and imaginary part are not completely independent .the kramers - kronig relations helps restore the imaginary part of an analytic function from its real part and vice versa . to recover the phase from the amplitude , the function should be written as : with its amplitude and its phase .the kramers - kronig relations can then be applied as follows : the basis of this relationship are the cauchy - riemann conditions ( analyticity of function ) . in some casesthis phase can also be obtained simply by using the hilbert transform of the spectrum : as the hilbert transform ( ) is related to the fourier transform ( ) : the calculation of phase can use an optimised fft code and is therefore much faster than calculating the kramers - kronig s integral .we implemented in matlab these two different phase reconstruction methods .the hilbert transform method has the advantage of being directly implemented in matlab , allowing a much faster computing .to test the performance of these methods we created a small monte - carlo program that randomly simulates profiles ( ) made of the combination of 5 gaussians according to the formula where and , and are random numbers with ] , \times 10^{-9 } ] .the values of these ranges have been chosen to generate profiles that are not disconnected ( that is profiles whose intensity drops to almost zero between two peaks ) without being perfect gaussian .we checked that our conclusions are valid across this range . using this formula we generated 1000 profiles , then took the absolute value of their fouriertransform and sampled at a limited number of frequency points ( ) as would be done with a real experiment in which the number of measurement points is limited ( limited number of detectors or limited number of scanning steps ) . to estimate the performance of the reconstructionseveral estimators are available .we choose to use the , defined as follow : where is the observed value , is the expected ( simulated ) value , is the weight of the point , n is the number of points . 
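a minimal sketch of the hilbert-transform phase recovery applied to a simulated multi-gaussian profile is given below: a random test profile is built, only the magnitude of its fourier transform is kept, the phase is estimated from the hilbert transform of the log-magnitude, and the profile is reconstructed and compared with the original. the parameter ranges, the alignment step and the error measure are simplified assumptions rather than the exact choices made in the paper.

```python
# hilbert-transform phase recovery on a simulated multi-gaussian bunch profile.
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)
t = np.linspace(-1.0, 1.0, 1024)

# random test profile made of 5 gaussians (illustrative parameter ranges)
profile = np.zeros_like(t)
for _ in range(5):
    a, mu, sig = rng.uniform(0.2, 1.0), rng.uniform(-0.2, 0.2), rng.uniform(0.03, 0.1)
    profile += a * np.exp(-0.5 * ((t - mu) / sig) ** 2)
profile /= profile.sum()

magnitude = np.abs(np.fft.fft(profile))               # what the experiment gives access to

# phase estimate from the hilbert transform of the log-magnitude spectrum
phase = -np.imag(hilbert(np.log(magnitude + 1e-30)))
reconstructed = np.abs(np.fft.ifft(magnitude * np.exp(1j * phase)))

# the recovered profile is only defined up to a time shift (and possibly a
# time reversal), so align both candidates on the peak before comparing
def aligned(p, ref):
    return np.roll(p, np.argmax(ref) - np.argmax(p))

candidates = [aligned(reconstructed, profile), aligned(reconstructed[::-1], profile)]
best = min(candidates, key=lambda p: np.sum((p - profile) ** 2))
chi2 = np.sum((best - profile) ** 2) / np.sum(profile ** 2)
print("normalised squared difference:", chi2)
```

the alignment and mirror test mirror the remark above that two similar profiles with a small offset would otherwise give a misleadingly poor value of the estimator.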
howevertwo very similar profiles but with a slight offset , will give a worse than a profile with oscillations ( see figure [ offsine ] ) .this can be partly mitigated ( in the case of horizontal offset ) by offsetting one profile with respect to the other until the is minimized .also we decided to look at the fwhm which was generalized as fwxm where $ ] is the fraction of the maximum value at which the full width of the reconstructed profile was calculated ( with this definition the standard fwhm is noted fw0.5 m ) .we created an estimator defined as follow : where and are the fwxm of the original and reconstructed profiles respectively . here two profiles that are similar but slightly offset ( in position or amplitude ) will nevertheless return good values of this estimator despite returning a rather large . despite being relatively similar .+ ; for profile with sine noise : fw0.1m=0.0241 , fw0.2m=0.044fwhm=0.0621 fw0.8m=0.1849 fw0.9m=0.3619 . as fwxm calculated from top of profile , for all profiles, width=264 ] to ensure that the choice of the parameters and for the simulations does not bias significantly the results , their value has been varied and this is shown in figure [ sigma_chi2 ] .( top ) and ( bottom ) due to effect of scaling in terms of the and ratio . for each point1000 simulations were made.,title="fig:",width=264 ] + ( top ) and ( bottom ) due to effect of scaling in terms of the and ratio .for each point 1000 simulations were made.,title="fig:",width=264 ] + ( top ) and ( bottom ) due to effect of scaling in terms of the and ratio .for each point 1000 simulations were made.,title="fig:",width=264 ] + ( top ) and ( bottom ) due to effect of scaling in terms of the and ratio .for each point 1000 simulations were made.,title="fig:",width=264 ] + different distributions have been used for the frequencies : linear , logarithmic , triple - sine . in most sampling schemes we used 33 frequencies to make it comparable with the triple - sine distribution used in .these sampling schemes are defined as follow : * _ triple - sine _ this sampling matches that of the e-203 experiment at facet .eleven detectors are located every around the interaction point and 3 different sets of wavelengths are used , giving the following distribution : with and varying between and by steps of . * _ linear sampling _ here sampling points are distributed uniformly .the first and last points of sampling are the first ( ) and last ( ) points used in the triple - sine sampling .the following formula gives the sampling frequencies : .\ ] ] * _ logarithmic sampling_. 
here sampling points are distributed logarithmically : /32).\ ] ] for this sampling also the first and last points are the same as in triple - sine sampling .the study of the sampling is important , as it shows the best position of the detectors and also how to optimize the system .linearly sampled spectrum gives the best result as shown in figure [ samp ] .criterium ( top ) and ( bottom).,title="fig:",width=264 ] + criterium ( top ) and ( bottom).,title="fig:",width=264 ] however , the linear sampling may not be practical to realize in a the real world .one needs to take into account the spatial size of the detectors ( about 10 degrees in the case of e-203 ) and there is also a limit on the start and end points of detectors location ( 35 - 145 degrees for e-203 ) .so linear sampling at a wide range of frequencies is impossible with this number of points .an investigation of how many linear sampling points can be used for a given angle difference between detectors shows that such physical constraints reduce strongly the number of detectors that can be used .figure [ lin12 ] shows examples of detector positions .the position of the red points is calculated using formula [ eq : lamb ] and blue are possible detector positions which do nt break the minimum detector distance ( mdd ) given on top of each plot .( top ) and ( bottom ) .,title="fig:",width=264 ] + ( top ) and ( bottom ) .,title="fig:",width=264 ]figure [ biglin ] shows a comparison of the performances achieved with such positioning for different mdd . in each casethe triple sine sampling ( ts ) is better than the linear sampling ( ls ) and close from the maximum number of detectors with linear sampling ( lsmx ) . so the ts configuration is favored and will be used in the rest of this paper .the comparison between ts1 , ts5 and ts10 shows that reconstruction performances are limited by the mdd .criterium ( top ) and ( bottom ) .ls is linear sampling with mdd and ts is triple sine sampling ; mx mean the reconstruction use the maximum number of detectors ( blue and red dots in figure [ lin12]).,title="fig:",width=340 ] + criterium ( top ) and ( bottom ) .ls is linear sampling with mdd and ts is triple sine sampling ; mx mean the reconstruction use the maximum number of detectors ( blue and red dots in figure [ lin12]).,title="fig:",width=340 ] the choice of 33 frequencies for the sampling of the spectrum was made to match the current layout used on e-203 .however it is important to check if there is an optimum value . to perform this check we used the same simulations and the same simulated spectrum but sampled with 3 to 140 points .the effect of changing the sampling frequencies on the is shown in figure [ sampling_chi2 ] .this study uses triple sine sampling with 1000 profiles for each point and both reconstruction method .( top ) and ( bottom ) ., title="fig:",width=264 ] + ( top ) and ( bottom ) ., title="fig:",width=264 ] it can be seen at figure [ sampling_chi2 ] that beyond about 33 sampling points the gain on the reconstructed is marginal . 
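the two generic grids compared above are straightforward to generate once the first and last sampling frequencies are fixed; in the experiment those end points are set by the triple-sine detector arrangement, which is taken here as given. the numerical end points below are placeholders, not the values used in the paper.

```python
# linear and logarithmic sampling grids between assumed end points.
import numpy as np

def linear_sampling(f_min, f_max, n=33):
    # uniformly spaced sampling frequencies between the two end points
    return np.linspace(f_min, f_max, n)

def logarithmic_sampling(f_min, f_max, n=33):
    # logarithmically spaced sampling frequencies between the same end points
    return np.logspace(np.log10(f_min), np.log10(f_max), n)

f_min, f_max = 1.0e11, 3.0e12        # Hz, placeholder end points
print(linear_sampling(f_min, f_max)[:3])
print(logarithmic_sampling(f_min, f_max)[:3])
```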
after applying the sampling procedurethe data need to be interpolated and extrapolated to have a larger number of points in the spectrum .interpolation is done using piecewise cubic hermite interpolating polynomial ( pchip ) , as suggested in .the interpolation function must satisfy the following criteria : it must conserve the slope at the two endpoints ( to have a continuous derivative ) and respects monotonicity .pchip interpolation has been chosen as it matches these requirements .for low frequency extrapolation two methods have been investigated : gaussian or taylorian . in the gaussian method , we defined the extrapolation as follow : where is the extrapolated spectrum at low frequency and the constants a , b , and c were chosen from the following conditions : * * * the extrapolation relies in the fact that according to the central limit theorem in the time space the expected profile is gaussian - like and in the frequency space it will also be gaussian .the other extrapolation method is based on taylor expansion with the following definition : approximation to the 4th order gives the following low frequency ( lf ) extrapolation : conditions for the constants a , b and c are the same .comparison of different lf extrapolation can be found in figure [ lf ] and the performances of these methods in figure [ lf2 ] . for each method ( bottom ) .gaussian and taylorian methods are described in the text .`` real lf spectrum '' means that the real lf spectrum is used . for this simulationthe hilbert method of phase recovery and high frequency extrapolation were used.,title="fig:",width=245 ] + for each method ( bottom ) .gaussian and taylorian methods are described in the text .`` real lf spectrum '' means that the real lf spectrum is used .for this simulation the hilbert method of phase recovery and high frequency extrapolation were used.,title="fig:",width=245 ] + for each method ( top ) and ( bottom ) ., title="fig:",width=245 ] for each method ( top ) and ( bottom ) ., title="fig:",width=245 ] in the rest of this paper we used the gaussian method .several high frequency ( hf ) extrapolation methods were also tested .the most common is : where is the extrapolated spectrum at high frequency and , where is spectrum value of final point .the second method uses the same consideration as in lai and sievers : assuming that the bunch size is finite with two end points at and at then the longitudinal charge distribution ( ) must follow .an integration by parts gives : the first term vanishes because of the boundary conditions , so for large , is proportional to and two conditions have to be matched : * * where is the last sampled point of the spectrum . 
to satisfy the boundary condition two constantsare needed , giving a two - terms extrapolation : or extrapolation with degree of frequency as free parameter : where the a and b coefficients which are calculated from the last data samples and the boundary conditions as follow : * * the requirement of finite bunch size requires , so in the case where the fit gives we use .two other extrapolation methods also have been investigated : * for * for where is the real spectrum .these hf extrapolation methods are compared in figure [ hf ] and [ hf2 ] .thus , by virtue of the above arguments and simulations , it s naturally to choose the high - frequency extrapolation by power function .+ + ( top ) and ( bottom).,title="fig:",width=245 ] + ( top ) and ( bottom).,title="fig:",width=245 ] +after applying extrapolation and interpolation , the spectrum recovery is completed .then we used different reconstruction techniques to reconstruct the original profile . for each reconstruction methodsome profiles are very well reconstructed whereas some other are not so well reconstructed .examples of well reconstructed profiles are shown in figure [ good_profiles ] and examples of poorly reconstructed profile are shown in figure [ bad_profiles ] .+ + + + + the and distribution of the 1000 simulations which were made and then reconstructed using the hilbert transform method and kramers - kornig reconstruction are shown in figure [ profiles_stats_hilbert ] .there is a good concordance in fwhm between two methods indicating that they are both good at finding the bunch length .however , the hilbert method gives lower indicating that this method is better at reconstruction of the bunch profile .( top ) and ( bottom ) distribution of 1000 simulations reconstructed using the hilbert transform method ( black line ) and kramers - kronig reconstruction method ( red line ) ., title="fig:",width=264 ] + ( top ) and ( bottom ) distribution of 1000 simulations reconstructed using the hilbert transform method ( black line ) and kramers - kronig reconstruction method ( red line ) ., title="fig:",width=264 ] the fact that the phase recovery method based on the kramers - kronig relation gives a worst than the method based on hilbert relation has been investigated .it is caused by the presence of negative components in the tails of the profiles .figure [ expkk ] highlights this issue for one of the profiles . and explains why the obtained by this method is higher as shown in figure [ profiles_stats_hilbert].,width=377 ] figure [ fwxm ] shows the different fwxh values for different values of x. this shows that at different height of the profiles the quality of reconstruction varies : there is a better agreement in the tails ( x=10% ) than at the top of the profile ( x=90% ) .figure [ mod ] shows the modulus of the difference between the original and reconstructed profiles .one can see oscillations in the difference between the original and reconstructed profile . for 1000 profiles with both methods.,width=264 ] + while doing this work we also became aware of the discussion in where it is argued that these reconstruction method have more difficulties with lorentzian profiles than gaussian profiles .therefore we simulated 1000 lorenzian profiles and performed a similar study .this is shown in figure [ lorenz ] .although the is slightly worse in that case than in the case of gaussian profiles there still a good agreement between the original and reconstructed profiles . 
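putting the pieces together, the spectrum-completion step ( pchip interpolation between the sampled points, a gaussian extrapolation towards zero frequency and a power-law extrapolation at high frequency ) can be sketched as below. the matching conditions used here, namely value and logarithmic slope at the first sampled point for the gaussian part, and an exponent fitted on the last few points and floored at 2 for the power-law part, are plausible assumptions and not necessarily the exact constants adopted in the paper.

```python
# spectrum completion: PCHIP interpolation plus gaussian LF and power-law HF
# extrapolation (matching conditions are assumptions of this sketch).
import numpy as np
from scipy.interpolate import PchipInterpolator

def complete_spectrum(f_sampled, s_sampled, f_dense):
    f_lo, f_hi = f_sampled[0], f_sampled[-1]
    interp = PchipInterpolator(f_sampled, s_sampled)

    # low-frequency gaussian S(f) = A exp(-b f^2), matched in value and log-slope at f_lo
    slope = (np.log(s_sampled[1] + 1e-30) - np.log(s_sampled[0] + 1e-30)) / (f_sampled[1] - f_sampled[0])
    b = max(-slope / (2.0 * f_lo), 0.0)              # d ln S / df = -2 b f, keep it decaying
    A = s_sampled[0] * np.exp(b * f_lo ** 2)

    # high-frequency power law S(f) = a f^(-n), exponent fitted on the last few points
    n_fit = np.polyfit(np.log(f_sampled[-4:]), np.log(s_sampled[-4:] + 1e-30), 1)[0]
    n = max(2.0, -n_fit)                             # finite bunch => at least a 1/f^2 fall-off
    a = s_sampled[-1] * f_hi ** n

    out = np.empty_like(f_dense, dtype=float)
    lo = f_dense < f_lo
    mid = (f_dense >= f_lo) & (f_dense <= f_hi)
    hi = f_dense > f_hi
    out[lo] = A * np.exp(-b * f_dense[lo] ** 2)
    out[mid] = interp(f_dense[mid])
    out[hi] = a * f_dense[hi] ** (-n)
    return out
```

the completed spectrum returned by this function is what would then be fed to the phase-recovery step sketched earlier.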
in the case of a lorenzian distribution ., title="fig:",width=264 ] + in the case of a lorenzian distribution ., title="fig:",width=264 ] + in our discussion so far we considered only the ideal case where no noise is added to the measured spectrum .however in a real experiment a noise component has to be added to the measured spectrum .this noise was added as follow : \ ] ] where is the observed value , is the observed value with noise , is a random number between 0 and 1 ( all numbers between 0 and 1 being equiprobable ) , and is the maximum noise for that simulation ( depending on the case this can be 5% , 10% , 20% , 30% , 40% or 50% ) .this study was done using linear sampling with 33 samples and 1000 simulated profiles for each noise value .figure [ noise ] shows how the is modified when this noise component is added . and as function of noise amplitude.,title="fig:",width=264 ] + and as function of noise amplitude.,title="fig:",width=264 ] +we performed extensive simulation to estimate the performance of two phase recovery methods in the case of multi - gaussian and lorenzian profiles . in both cases we found that when the sampling frequencies are chosen correctly we obtained a good agreement between the original and reconstructed profiles ( in most cases ; ) .this confirms that such methods are suitable to reconstruct the longitudinal profiles measured at particle accelerators using radiative methods .the authors are grateful for the funding received from the french anr ( contract anr-12-js05 - 0003 - 01 ) , the pics ( cnrs ) `` development of the instrumentation for accelerator experiments , beam monitoring and other applications '' and research grant # f58/380 - 2013 ( project f58/04 ) from the state fund for fundamental researches of ukraine in the frame of the state key laboratory of high energy physics and the ideate international associated laboratory ( lia ) .
|
measurements of coherent radiation at accelerators typically give the absolute value of the beam profile fourier transform but not its phase . phase reconstruction techniques such as hilbert transform or kramers kronig reconstruction are used to recover such phase . we report a study of the performances of these methods and how to optimize the reconstructed profiles .
|
over the past two decades , color centers in diamond have emerged as promising systems for quantum information ( qi ) applications and precision sensing .they were the first and are still among the brightest solid - state , room - temperature single photon sources .moreover , several defects allow for optical access to associated electron and nuclear spin states , which can exhibit long coherence times , enabling their use as quantum memories in qi applications .these defects also exhibit strong sensitivity to magnetic field , electric field , strain , pressure , and temperature dependence , enabling sensing of small fields at low frequencies , often at room temperature , and down to the single nuclear spin level . by coupling these already promising defect centers to optical nano- and microstructures , one can shape and control the optical properties to increase the performance , efficiency , and fidelity of sensing and qi protocols .recent demonstrations of a variety of diamond patterning techniques focused ion beam ( fib ) milling , reactive ion etching ( rie ) , quasi - isotropic etching , and electron beam induced etching ( ebie ) have enabled patterning of diamond at the nanoscale , and thus the field of diamond nanophotonics . with high quality diamond fabrication now a reality ,many optical systems have been proposed and realized in diamond and hybrid diamond systems . here , we will discuss photonic structures for increased collection efficiency , stand - alone defect - cavity systems for tailored light - matter interaction , and hybrid photonic architectures for photon collection , routing , interaction , and detection .we will particularly focus on diamond _ photonic _ structures coupled to single quantum systems , and not discuss diamond plasmonics , non - linear photonics in diamond resonators , raman - lasers , optomechanical systems , and hybrid systems with diamond nanocrystals .while there have been significant advances with nanodiamonds in hybrid photonic systems , and nanodiamond fabrication and properties have advanced , in this paper we will focus on diamond nano- and microstructures patterned into polycrystalline and single - crystal diamond , which have been shown to have superior optical and spin properties compared to most diamond nanocrystals .diamond s exceptional combination of a wide electronic bandgap , high mechanical strength , high thermal conductivity , and large hole mobility has made it an attractive material for high frequency , power , temperature , and voltage applications such as power electronics . furthermore , diamond is chemically inert and biocompatible making it a promising material for biological applications , especially in nanocrystalline form . 
in the field of quantum optics , diamond is uniquely attractive due to its wide bandgap , ev , which allows it to host to more than 500 optically active defects , known as ` color centers ' .these crystalline defects , corresponding to some combination of displacement or substitution of the native carbon atoms within the diamond lattice , create spatially localized , energetically separated ground and excited states within the electronic band gap of the bulk crystal .the high debye temperature of diamond ( ) leads to a relatively low phonon population at room temperature , which allows these defect - related electronic states to persist for long times without suffering from phonon - induced relaxation .finally , diamond crystals are relatively free of nuclear spins , with a natural composition of of that can be increased to in isotopically - enhanced chemical vapor deposition ( cvd ) growth .this lack of background spins leads to low magnetic field fluctuations , in principal facilitating long coherence times for the few electronic or nuclear spins present .these properties make diamond an ideal host material for single quantum defects and photonic elements . of the over 500 optically active defect centers in diamond ,more than 10 have been demonstrated to exist as single quantum emitters . as shown in fig .[ fig : colorcenters1 ] , these centers span a wide range of single - photon emission wavelengths across the visible spectrum ranging from the blue into the near - infrared . of these single - photon sources , three have been observed to exhibit optically detected magnetic resonance ( odmr ) , in which changes in photon emission intensity are observed while driving the spin on and off resonance with a tunable microwave field , first demonstrated for the negatively charged nitrogen vacancy center ( nv ) .this is a convenient mechanism for direct spin state readout via photoluminescence .spin - state - dependent optical transitions , in general , enable fast initialization , manipulation and measurement of spin states using laser excitation .many quantum information processing ( qip ) schemes use a link between stationary solid - state quantum memory bits and flying photonic qubits as a basic resource , making diamond spin systems an attractive candidate for qip .the most prominent among the odmr - active diamond color centers is the nv , which exhibits stable room temperature single photon emission and particularly long electron and nuclear spin coherence times compared to other solid state defect centers . 
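the odmr readout mechanism described above can be illustrated with a minimal numerical sketch : photoluminescence drops when the applied microwave frequency hits a spin resonance , and a static magnetic field splits the resonances via the zeeman effect . the contrast , linewidth and lorentzian line shape used below are illustrative assumptions , not measured parameters .

```python
import numpy as np

D_GHZ = 2.87             # nv ground-state zero-field splitting (approx.)
GAMMA_MHZ_PER_MT = 28.0  # electron gyromagnetic ratio, ~28 MHz/mT (approx.)

def odmr_trace(freq_ghz, b_mt=1.0, contrast=0.15, linewidth_mhz=8.0):
    """Simulated cw-odmr spectrum: unit photoluminescence with two lorentzian
    dips at D +/- gamma*B (illustrative model, not a fit to data)."""
    shift_ghz = GAMMA_MHZ_PER_MT * b_mt * 1e-3
    pl = np.ones_like(freq_ghz)
    for f0 in (D_GHZ - shift_ghz, D_GHZ + shift_ghz):
        hwhm = 0.5 * linewidth_mhz * 1e-3
        pl -= contrast * hwhm**2 / ((freq_ghz - f0)**2 + hwhm**2)
    return pl

freqs = np.linspace(2.80, 2.94, 1401)
pl = odmr_trace(freqs, b_mt=2.0)
print("simulated odmr contrast at resonance:", round(1.0 - pl.min(), 3))
```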
in the field of quantum optics ,the nv has notably been applied in experiments demonstrating spin - photon entanglement , distant spin entanglement , quantum teleportation , and finally the first loophole free demonstration of bell s inequality .recently , entangled absorption was demonstrated mediated by an inherent spin - orbit entanglement in a single nv .also the coherent transfer of a photon to a single solid - state nuclear spin qubit with an average fidelity of 98% and storage times over 10 seconds demonstrated .in addition to the nv , the negatively charged silicon vacancy defect center ( siv ) has recently gained attention as an optically accessible single spin system .notably , the siv in pure , strain free crystals possesses optical transitions that are naturally nearly fourier - transform - limited and insensitive to environmental noise .this has enabled the demonstration of high - fidelity hong - ou - mandel interference between photons emitted by two siv centers .unfortunately , the spin coherence times are currently limited by phonon interactions to several 10s of nanoseconds , which is many orders of magnitude lower than that of the nv .the realization of diamond - based photonic devices requires that optical design parameters are accurately transferred into diamond by either an additive or subtractive approach .we will focus on the subtractive approach , as it is more common for well designed photonic structures .various fabrication methods have been used to demonstrate photonic devices in diamond . focused ion beam ( fib ) milling defines and transfers the photonic pattern directly into diamond .direct electron beam lithography ( ebl ) writing defines a pattern with nanometer precision in an electron beam resist layer and is usually combined with a subsequent dry etching step to transfer the photonic pattern into the diamond .transferrable silicon mask lithography exploits the relatively mature fabrication process on silicon - on - insulator ( soi ) samples , and provides high etch selectivity for the subsequent oxygen etching step . 
due to the lack of commercially available diamond thin films of optical thickness , 3-dimensional ( 3d ) monolithic patterning techniqueshave also been developed such as angular fib and rie etching and isotropic etching techniques .all successful photonic devices must begin with high - purity single crystal diamond .while initial defect studies were done on natural diamond , current nano - photonic structures are made from synthetic diamond .chemical vapor deposition ( cvd ) and high - pressure high - temperature ( hpht ) growth allow for the production of low strain single - crystal diamond with controllable defect concentrations .these high - quality single - crystal diamonds are limited in size to a few tens of mm , while polycrystalline films and bulk diamonds can be grown on significantly larger scales , though with less control over lattice strain and defect inclusion across the sample .growth process parameters strongly influence the quality of the crystal and the concentration and type of lattice defects , making diamond growth still technically challenging , though high quality synthetic diamonds are available commercially .ultra - pure diamond , categorized as type iia for low nitrogen and boron content , is often used as a starting point for quantum optics applications because of higher purity .in fact , defects in ultra - pure diamond can be eliminated down to levels ppb .however , desired defects can also be introduced in the diamond growth up to high levels ppm , allowing for control of the native defect densities across several orders of magnitude .this approach can avoid crystalline damage associated with defect creation through implantation , and has been shown to provide long spin - coherence times , high defect density or spatially - selected defect layers as will be discussed in section 4c .further insights into the synthesis of single crystal diamond by hpht or cvd methods , either by homoepitaxial growth or heteroepitaxial deposition on large - area single crystals of a foreign material are discussed in ref . .many photonic systems that confine light on the order of the wavelength are based on thin film substrates which are either suspended or supported by a lower index of refraction material to achieve total internal reflection . for single mode devices operating resonantly with color centers in the visible range, such a thin film has to be on the order of 200 nm in thickness .in contrast to many other semi - conductor materials , single crystal diamond growth is challenging and has been limited mainly to diamond - on - diamond techniques , precluding the use of an underlying sacrificial layer or lower index material .one very promising exception is the heteroepitaxial growth via bias - enhanced nucleation on iridium / yttria - stabilized zirconia buffer layers on silicon ( layer structure : diamond / ir / ysz / si(001 ) ) . due to an unmatched degree of initial alignment and extraordinary high density of such epitaxial diamond grains on the iridium layer, they can lose their polycrystalline character during subsequent textured growth within a few to tens of micrometers .a several hundred nanometer thick suspended diamond membrane can be fabricated by first removing the silicon substrate and buffer layers and then dry - etching the polycrystalline backside of the diamond . 
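the film thicknesses quoted above can be motivated with a back - of - the - envelope slab - waveguide estimate : below the cutoff thickness of the first higher - order mode , a symmetric air - clad diamond slab guides only the fundamental te / tm modes . the symmetric - slab cutoff formula and refractive index below are standard textbook values ; the estimate ignores the asymmetry of supported ( diamond - on - insulator ) films .

```python
import math

n_diamond = 2.41   # refractive index of diamond in the visible (approx.)
n_clad = 1.0       # air cladding on both sides (suspended membrane)

def single_mode_thickness(wavelength_nm):
    """Cutoff thickness of the first higher-order mode of a symmetric slab:
    t_max = lambda / (2 * sqrt(n_core**2 - n_clad**2))."""
    return wavelength_nm / (2.0 * math.sqrt(n_diamond**2 - n_clad**2))

for wl in (637.0, 737.0):   # nv and siv zero-phonon lines, in nm
    print(f"lambda = {wl:.0f} nm -> single-mode for t < {single_mode_thickness(wl):.0f} nm")
```

the resulting values ( roughly 150 - 170 nm for the nv and siv lines ) are consistent with the few - hundred - nanometer membrane thicknesses quoted above .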
for the fabrication of diamond thin films from bulk diamond substrates ,different methods have been developed .fib milling has been pursued for the separation of small diamond slabs from the bulk .however , the highly physical nature of the ion bombardment causes crystal damage as evidenced by raman spectroscopy , photoluminescence , and transmission electron microscopy .reactive ion etching ( rie ) of slabs was seen to cause less crystal damage in the final product , allowing for spin coherence times approaching 100 . however , this rie method only allowed for the production of small ( ) membranes , which limits the ability to post - process and fabricate more complex photonic structures .there has also been work towards the separation of a diamond film from the bulk via the controlled creation of a damage layer .mev ions accelerated at the diamond crystal will stop at an energy - dependent depth .the damage caused by collisions with the lattice will create a well - localized graphite layer that can be removed via a wet - etch step .crystal damage is inevitably induced in the removed membrane .however , this can be mitigated with a strategic etch of the damaged side , and subsequent diamond overgrowth , allowing for high quality spin properties of defect centers .fib milling of diamond , in which carbon atoms are mechanically removed from the lattice with accelerated ga ions , or o ions is a masks process which can be used for fabrication of diamond photonic devices .the spatial resolution is mainly limited by the ion beam width .this gives several advantages for diamond patterning : ( i ) a mask is not required , eliminating the need for special handling or resist spinning , and ( ii ) optical isolation from the bulk is achieved simply by tilting the stage to etch at an angle relative to the beam and to undercut the structure .fib milling has been used to demonstrate nanobeam cavities and free - standing , undercut bridge structures in bulk diamond , and two - dimensional ( 2d ) photonic cavities in a single crystal diamond layer on a buffered si substrate . however , this technique is limited by the relatively long milling time and inclined side walls leading to limited cavity quality - factors , and the residual damage to the diamond material , which results in reduced color center properties as well as additional optical and spin background . the material damage from ion millingcan be partially removed either by acid treatment and oxidation step or using electron beam induced local etching .this minimizes the optical losses and fluorescence background from the ion contamination .ebl is widely used for defining patterns with nanometer feature size , and is applied in combination with an etching method , most often rie or inductively coupled plasma ( icp ) rie .ebl typically requires a conductive substrate that is several millimeters in size and an e - beam resist having sufficient etch selectivity with respect to the substrate for the subsequent pattern transfer .these requirements are challenging to satisfy for small , insulating diamond samples . 
coating diamond with a conductive layeris widely used to minimize charging during ebl writing .hydrogen silsesquioxane ( hsq ) resist is a high resolution e - beam resist that can be used to pattern diamond .its modest intrinsic selectivity to standard diamond rie etch recipes can be enhanced by post - development electron curing .the etch selectivity can be further enhanced with other mask layers patterned via lift - off , or an initial short dry or wet etch step .such recipes have been used to demonstrate diamond nanowires , suspended waveguide and nanobeam cavities , diamond plasmonic apertures , and gratings . to avoid spin - coating small diamond samples and exposing them to electron , ion , or uv radiation ,a novel nanofabrication technique was developed as illustrated in fig .[ fig : sipatterning ] . instead of defining a mask directly on the diamond substrate ,a frameless single - crystal silicon membrane mask is pre - patterned from ( soi ) samples and placed onto the diamond substrate silicon using membrane - transfer techniques .this enables pattern transfer with feature sizes down to 10 nm , etch selectivity of over 38 for the subsequent oxygen rie , and automatic positioning of ion implantation apertures with respect to the photonic structure , as will be discussed in sec .[ artificialcreation ] with alignment accuracy guaranteed by the ebl writing .for patterning of the silicon masks , high resolution ebl is applied , in combination with well developed rie . by applying one of two complimentary transfer techniques, one can place both small and large masks on diamond substrates with sizes up to 200 x 200 m , and 1 x 1 mm , as required by the sample size to be patterned . for small masks ,a pick - and - place method is applied based on nanomanipulation of a micro - polydimethylsiloxane ( pdms ) adhesive attached to a tungsten probe tip . for large masks ,a stamping approach is used with a transparent polytetrafluoroethylene ( ptfe ) sheet .after the pattern is transferred to diamond by oxygen etching , the si membrane masks are mechanically removed , avoiding solvent - based mask removal procedures on diamond substrates . the si mask patterning was applied for various photonic nanostructures in both bulk samples and diamond thin films with sizes down to hundreds of square microns . to pattern diamond membranes with nm thickness ,the diamond membranes are adhered to a si substrate , the patterned si mask is put on the diamond membrane using the pick - and - place technique , photonic patterns are transferred into the diamond membrane by oxygen rie , the si mask is mechanically removed , and finally isotropic sf plasma is applied to undercut the etched diamond photonic structure for optical measurements . as discussed above , the fabrication of large uniform thin film diamond samples has not yet been developed .this limits the fabrication of diamond devices that guide and capture light when 2d or 3d confinement is required . 
to achieve the needed optical isolation for photonic structures in bulk diamond , and to circumvent the need for large thin film diamond samples for patterning, alternative fabrication approaches have been demonstrated that enable the monolithic patterning of waveguide and cavity structures into bulk diamond .one design concept is based on suspended devices with triangular cross - section .such designs can be realized by angled etching , either by rotating the sample and milling by fib , or by guiding the trajectory of ions with a faraday cage in an rie process .angular etching was used to demonstrate race - track patterns with ultra high quality factor ( q factor ) and one - dimensional ( 1d ) nanobeam cavities .another technique to produce free - standing structures is a quasi - isotropic oxygen undercut .this technique is based on the combination of standard vertical rie and zero forward bias oxygen plasma etching at an elevated sample temperature .this zero bias etching takes advantage of the low directionality of the oxygen ions and the thermally activated diamond surface leading to a quasi - isotropic chemical etching effect .this technique has enabled high - q cavities in a nanofabricated photonic disk , and high mechanical quality factor waveguides .color centers can be found in natural or as - grown synthetic diamond .however , for high quality samples with very low defect concentrations ( e.g. n<1ppm ) the concentration of color centers is too low for many of the intended applications , and the distribution is random .controlled creation of defect centers is important for the fabrication of photonic constituents in a scalable way and for the extension beyond present proof - of - principle implementations .one can differentiate between methods that rely on ( i ) activation of incorporated defects in as - grown diamond ,( ii ) controlled incorporation of defects during growth , and ( iii ) controlled implantation of defects after diamond growth .combinations of these methods have also been demonstrated .for example , activation methods can be combined with the controlled or targeted placement methods .a very powerful tool is the spatially deterministic creation via focused ion beam or masked implantation .these methods enable high yield creation of nanostructures with incorporated defect centers .the synthetic creation of defect centers depends on a wide range of parameters ; annealing temperature , vacancy density , and local charge environment have all been shown to affect nv creation . while many works address nv formation as a function of these parameters , the detailed mechanism of color center creation is still not definitively understood . until recently , it was commonly proposed that diffusing vacancies are trapped by substitutional atoms ( e.g. , nitrogen ) to create a color center ( e.g. 
, the nv ) .therefore , the established recipes for creating defect centers rely on annealing above 600 , at which temperature vacancies become mobile .this mechanism has been questioned by advanced density functional theory ( dft ) calculations that were applied to determine the formation and excitation energies , the charge transition levels , and the diffusion activation energies for nitrogen- and vacancy - related defects in diamond .these calculations concluded that irradiation of diamond is more likely to directly create nv defects , and not isolated vacancies .direct nv creation has been shown without thermal annealing by irradiation of diamond that has been implanted with nitrogen ions with low - energy electrons ( kev ) and beams of swift heavy ions ( gev , mev / u ) . however , this model of direct nv creation is contradicted by other works .for example , experimental results still show evidence of high vacancy mobility and indicate formation of nvs after implantation during annealing .further fundamental investigation of defect center creation is required to understand this process in full detail .annealing temperatures are a crucial tool to control the concentration of different types of lattice and crystal defects .while it is in principle sufficient to anneal samples just above above 600 , temperatures around 850 were chosen for most demonstrations over the past years .recently , temperatures up to 1200 are being applied to reduce strain and lattice defects , leading to increased spin - coherence times of nv . for the siv this leads to a narrowing of the inhomogeneous distribution from nm ( after 800 anneal ) to 0.03 nm ( 15 ghz , after 1100 anneal ) , and results in nearly lifetime - limited optical linewidths .furthermore , above 1100 the concentration of di - vacancies is reduced as their bonds are broken .di - vacancies are suspected to influence the photostability of nv centers and spectral diffusion properties of the nv zpl . for n implantation doses of /cm , energies of 85 kev and annealing up to 1200, stable optical transitions of the nv zpl with linewidth down to 27 mhz have been demonstrated , which is close to the lifetime - limited emission linewidth of about mhz .charge - state stability has been shown to be directly effected by surface termination , with fluorination leading to a higher concentration of stable nv centers .it is commonly assumed that surface treatment also affects the nv zpl stability .however this has not been systematically studied in the literature at the time of this review .defect centers can be formed by the creation of additional vacancies in doped diamond by irradiation with energetic neutrons , electrons , or ions in combination with a subsequent annealing step above 600 . in this process , ions already present in the diamond lattice can be combined with the newly created vacancies .early work used electron and ga beams to irradiate n - rich type - ib diamond to create vacancies and indirectly defect centers , in particular nv centers , from already incorporated n ions . for an unpatterned diamond surface , a spatial lateral resolution below 180nm was achieved .controlling the creation depth relative to the surface is challenging , as lattice defects are created along the path of the particle in the lattice , and the scattering cross - section varies for every species .scanning focused he - ion irradiation and subsequent annealing was also applied for the creation of nv centers . 
while these works achieve spatially localized nv creation , large areas , in particular entire samples , can also be irradiated to create large ensembles .such large area irradiation was , for example , used to create a millimeter - scale diamond sample with about 16 ppm ( corresponding to ) nvs .such samples with large ensembles of spins enable magnetic - field measurements with sensitivities down to in the low - frequency regime around 1 hz . for effective sensor volumes of mm and ensembles of nv ,photon - shot - noise - limited magnetic - field sensitivity was demonstrated with a sensitivity of 0.9 pt for ac signals of khz .an alternative way of controlling the depth of defect centers relative to the diamond surface is delta - doping .this has been experimentally demonstrated for the nv and the siv . for the nv ,a nanometer - thick nitrogen - doped layer is created by the controlled introduction of n gas during plasma enhanced chemical vapor deposition ( pecvd ) diamond growth .similarly , siv are created by controlling the si concentration during the growth , giving control of the concentration over two orders of magnitude .subsequent electron irradiation and annealing leads to formation of nv centers in a thin layer .this final nv creation process causes less crystal damage than direct ion implantation methods , and is therefore advantageous for both long spin - coherence times and stable and narrow spectral linewidth .delta - doped diamond thin - films have been applied to couple ensembles of nv to a nanobeam photonic crystal cavity , demonstrating that this technique could be interesting for single - nv cavity coupled systems . however , in this work confinement was demonstrated in 1d only , but could be combined with fib or ebeam activation to achieve 3d confinement . for high - resolution sensing in fluids ,delta - doping enabled engineered diamond probes with diameter and height ranging from 100 nm to 700 nm and 500 nm to 2 m , respectively . besides incorporating atomic defects into the lattice, the concept of delta doping can also be applied to engineer specific nuclear spin environments , e.g. , nanometer - thick layers of in ultra - pure natural abundance diamond by switching between purified and ( 99.99% ) source gases during diamond pecvd growth .this is promising for the creation of a controlled number or distribution of nuclear spin memories , which could be used for spin - spin entanglement , quantum error correction protocols or for quantum simulators .the most common method for the creation of color centers is direct ion implantation of a color center constituent for example , n , si , cr , or other ions into the diamond crystal . subsequent annealing creates defect centers .this method enables control of the center depth with respect to the surface , as can be determined via srim simulations .the first demonstration of this method used n ions with mev energy , corresponding to an implantation depth of about 1.15 m .shallow nv were created with 7 kev ion energy , corresponding to an implantation depth of about 10 nm , and the typical 2-fold hyperfine splitting was demonstrated , in contrast to a 3-fold hyperfine splitting for natural defects .single photon emission was demonstrated from single siv centers created via ion implantation of ions . 
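the ensemble sensitivities quoted earlier in this section can be connected to a commonly used photon - shot - noise estimate , in which the minimum detectable field scales inversely with the odmr contrast , the square root of the detected photon rate , and the spin interrogation time . the sketch below omits numerical prefactors of order one , and the contrast , photon rate and interrogation time are illustrative assumptions , not parameters of the cited experiments .

```python
import math

H_BAR = 1.0545718e-34    # J*s
MU_B = 9.2740101e-24     # J/T
G_E = 2.003              # nv electron g-factor (approx.)
gamma = G_E * MU_B / H_BAR   # rad/(s*T)

def shot_noise_sensitivity(contrast, photons_per_s, t_interrogation):
    """Commonly quoted shot-noise estimate for spin-based magnetometry,
    eta ~ 1 / (gamma * C * sqrt(R) * tau) in T/sqrt(Hz); order-one
    prefactors are omitted (illustrative only)."""
    return 1.0 / (gamma * contrast * math.sqrt(photons_per_s) * t_interrogation)

# illustrative ensemble numbers (assumptions, not values from the cited works)
eta = shot_noise_sensitivity(contrast=0.01, photons_per_s=1e14, t_interrogation=30e-6)
print(f"estimated sensitivity ~ {eta * 1e12:.1f} pT/sqrt(Hz)")
```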
to further understand the creation process of color centers via ion implantation , the creation efficiency of nvs as a function of ion energy was experimentally determined and compared to theoretical models , leading to 25% nv creation yield per implanted nitrogen ion .all these examples have not demonstrated control of the lateral position of created color centers .the relative alignment of color centers to each other is important for the deterministic arrangement in an array , in particular if these centers are used as a grid of sensors or as a network of entangled spins .depending on the application , lattice constants as low as tens of nanometers are required with precise positioning at each lattice site .one way to achieve this goal is the targeted implantation through 30 nm apertures in the tip of an atomic force microscope ( afm ) .this afm method was combined with stimulated emission depletion microscopy to demonstrate nanometer - scale mapping of randomly distributed nv within a less than 100 nm diameter spot .a different method for the precise relative alignment of color centers is the implantation through large - scale lithographically defined apertures , for example , ebl written apertures in beam resist or ebl patterned si - masks . in the latter experiment , ensembles of individually resolvablenv were created with nanometer - scale apertures in ultrahigh - aspect ratio implantation masks .these masks were fabricated by narrowing down apertures via atomic - layer - deposition ( ald ) of alumina , enabling a gaussian fwhm spatial distribution of about 26.3 nm , thus , reaching the lateral implantation straggle limit , see fig .[ fig : targetcreation ] .irradiation is not limited to as - grown diamond but can also be used to increase the creation yield of delta - doped samples .for example , implantation of a delta - doped sample post - growth , creates additional lattice defects , and individual nv can be localized within a volume of ( 180 nm) in an unpatterned diamond at a predetermined position defined by an implantation aperture .alternatively , by combining delta - doping for vertical confinement , and electron irradiation in a transmission electron microscope ( tem ) for lateral confinement , nv were created in a volume of less than 4 nm x 1 .a very versatile tool for the creation of color center arrays is focused ion beam implantation .it is maskless and enables the implantation of almost arbitrary patterns .similar to the electron beam in an scanning electron microscope , a focused ion beam can be applied to control the position and concentration of ions .this method has been used to implant n ions within a spot size of approximately 100 nm .similarly , arrays of silicon - vacancy centers were created by low - energy focused ion beam implantation .these are promising methods towards the targeted coupling of single defect centers to nanostructures .the deterministic coupling of a single or few color centers to a photonic nanostructure , in particular a high - q cavity , is one of the most important prerequisites for the upscaling of qi architectures .such deterministic coupling was recently demonstrated by fabrication of a photonic crystal cavity around a pre - characterized siv by fib milling enabling a resonant purcell enhancement of the zero - phonon transition by a factor of 19 , mainly limited by the positioning accuracy . 
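the positioning accuracy achievable with masked implantation can be estimated by adding the aperture size and the lateral ion straggle in quadrature , assuming both contributions are approximately gaussian ; the numbers below are illustrative assumptions of the same order as the values quoted above , not measured values .

```python
import math

def combined_fwhm(aperture_fwhm_nm, straggle_fwhm_nm):
    """Quadrature sum of two (assumed gaussian) broadening contributions."""
    return math.hypot(aperture_fwhm_nm, straggle_fwhm_nm)

aperture_nm = 15.0    # assumed effective aperture fwhm after ald narrowing
straggle_nm = 20.0    # assumed lateral straggle fwhm for shallow n implantation

total = combined_fwhm(aperture_nm, straggle_nm)
sigma = total / (2.0 * math.sqrt(2.0 * math.log(2.0)))   # fwhm -> standard deviation
print(f"expected implantation spot fwhm ~ {total:.1f} nm (sigma ~ {sigma:.1f} nm)")
```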
to achieve a higher positioning accuracy , targeted implantation of ions into a photonic crystal cavitywas realized with the afm method discussed earlier , and purcell enhancement of single nv centers was demonstrated . a more scalable fabrication method for cavity - defect center systemsis based on an implantation mask with small apertures of nm in diameter for targeting a large number of cavity mode maxima with a wide beam . by combiningthe nanocavity etch mask with an implantation mask into a single physical mask , rie etching ( see sec . [sec : diamondpatterning ] ) and implantation can be carried out subsequently without the need of challenging re - alignment processes for two - mask processes . with this method , intensity enhancement of a factor up to 20was demonstrated ( fig .[ fig : targetcreationl3 ] ) .+ to achieve optimal light - matter coupling in cavities and other photonic systems , the defect centers must not only have precise spatial positing with respect to the optical field , but their emission dipoles must also be correctly aligned .each color center has an individual atom - vacancy composition and geometry , and therefore a certain emission dipole orientation .for instance , the nv center has four possible orientations with respect to the crystal lattice . naturally occurring nv populations have a random distribution of orientations . in the last few years, research has been done to control the orientation of nv centers during cvd growth and thus increase the device yield .initial studies showed that diamond grown with homoepitaxial cvd growth on [ 110 ] and [ 100 ] oriented substrates mainly supports two nv orientations when the growth parameters are controlled precisely .further work showed that for nv centers created during cvd of diamond on [ 111 ] surfaces , microwave plasma - assisted cvd yields 94% of nv centers along a single crystallographic direction .a different study showed that 97% perfect alignment can be obtained by controlling the cvd growth parameters precisely .this research is promising , and if the exact mechanisms can be understood and these samples can be made routinely with high yield , this technology can help increase quantum sensing sensitivity as well as interaction with fabricated photonic structures .in this section , we will focus on micro- and nanophotonic structures to increase the collection efficiency of photons emitted by defect centers .a higher collection efficiency leads to improved entanglement rates for both emission - based and absorption - based quantum communication applications and also to higher read - out fidelities for quantum sensing applications . without any modification of the diamond surface , the high refractive index ( ) of diamond results in a relatively small angle ( ) of total internal reflection at diamond - air interfaces , allowing only a small part of the overall emission to exit the diamond .even for very shallow nv ( several 10s of nm in depth ) relevant for sensing applications , emission into the air is unfavorable due to the directed emission of a dipole into the higher refractive index material . to overcome this limitation , a variety of photonic structureshave been implemented at the diamond - air interface .a selection of devices and methods is discussed in more detail in this section .one approach for overcoming limited collection efficiencies at diamond - air interfaces are cylinder- or cone - like structures etched into diamond . 
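the limited extraction from a flat diamond - air interface discussed above can be made concrete with a small isotropic - emitter estimate : only emission within the total - internal - reflection cone can leave the crystal . the sketch below neglects fresnel reflection losses and the dipole emission pattern , so it is an upper bound for an unstructured surface .

```python
import math

n_diamond = 2.41   # refractive index of diamond in the visible (approx.)

theta_c = math.asin(1.0 / n_diamond)               # critical angle
escape_fraction = 0.5 * (1.0 - math.cos(theta_c))  # solid-angle fraction of one escape cone

print(f"critical angle: {math.degrees(theta_c):.1f} deg")
print(f"fraction of 4*pi emission inside one escape cone: {escape_fraction:.1%}")
```

the cylinder - and cone - like waveguiding structures introduced above circumvent this few - percent limit by funneling the emission into a guided mode that can be extracted efficiently .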
depending on their shape and aspect ratio , they are referred to as diamond microcylinders , nanowires , nanopillars , nanobeams , or nanowaveguides . conceptually , these structures are micro- or nanometer size single - mode waveguides : the defect couples directly to a single waveguide ( wg ) mode , while emission into other modes is suppressed .this enables efficient coupling to that specific wg mode . for a relatively narrow emission line of a few nanometers , e.g. the zpl of an nv or siv, the coupling efficiency can be up to 86% . from the wg modethe light is then either launched into free - space , bulk diamond , or another guiding photonic structure , enabling high overall collection efficiencies .such structures can be used as standalone devices as discussed in this section or can be integrated in hybrid photonic circuits and fiber architectures ( sec .[ sec : hybrid ] ) .the first demonstration of microcylinders with the high aspect ratio of 8 ( 25 for exceptional cases ) did not yet consider photonic applications but demonstrated smooth and high - rate reactive ion etching of diamond .numerical modelling was later used to study the coupling of an nv to the optical modes of a nanowire , and to determine the optimal nanowire parameters for large photon collection efficiency . for nanowires with diameters of 180 nm to 230 nm and for s - polarized dipoles with nanometer emission linewidth, more than 80% of emitted photons can couple to the nanowire mode .such nanowires were realized in the same work in both bulk single crystal and polycrystalline diamond and were applied to demonstrate high photon collection efficiencies of an nv with a detected photon flux of about kcts / s , ten times greater than for bulk diamond while using ten times less laser excitation power ( under the same excitation conditions ( objective lens with a numerical aperture of 0.95 ) .an improvement to about kcts / s photon flux in saturation was achieved with nv located about m away from the nanowire end by combining ion implantation and top - down diamond nanofabrication .these experiments were realized on [ 100]-oriented diamond , where the nv dipole is inclined to the nanowire axis , limiting the nv dipole coupling to the nanowire mode , hence limiting the overall photon collection efficiency .this limitation was overcome by fabricating such nano - structures on [ 111]-oriented diamond for which the electric dipole can be in - plane , increasing saturated fluorescence count rates to over counts per second .spin coherence measurements of nv in the sample before and after fabrication demonstrated the quality of their nano - fabrication procedure , with average spin coherence times remaining unaffected at . a further study in controlling the shape of nano pillars and its corresponding guiding propertieswas realized with ebl and inductively coupled plasma rie with a two resin technology and the usage of a titanium metal mask . by integrating a nanopillar into a diamond afm cantilever and additionally positioning a single nv at the tip of the nanopillar close to the diamond surface , a robust scanning sensor for nanoscale imagingwas realized , demonstrating imaging of magnetic domains with widths of 25 nm , and magnetic field sensitivities down to 56 nt / hz at a frequency of 33 khz . 
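the saturation count rates quoted throughout this section are usually extracted by fitting a two - level saturation model to the measured count rate versus excitation power . the sketch below uses the standard form i(p) = i_sat * p / ( p + p_sat ) plus a linear background ; the synthetic data points are illustrative , not measured values .

```python
import numpy as np
from scipy.optimize import curve_fit

def saturation(p, i_sat, p_sat, bg):
    """Two-level emitter saturation curve with a linear background term."""
    return i_sat * p / (p + p_sat) + bg * p

# synthetic "measured" data (illustrative numbers, arbitrary scale)
power_mw = np.linspace(0.05, 5.0, 20)
rng = np.random.default_rng(1)
counts_kcps = saturation(power_mw, i_sat=350.0, p_sat=0.8, bg=5.0)
counts_kcps *= 1.0 + 0.02 * rng.standard_normal(power_mw.size)

popt, _ = curve_fit(saturation, power_mw, counts_kcps, p0=(300.0, 1.0, 1.0))
i_sat, p_sat, bg = popt
print(f"fitted saturation count rate: {i_sat:.0f} kcps at p_sat = {p_sat:.2f} mW")
```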
for high - resolution sensing in fluid , cylindrical diamonds particles with diameter ( height ) ranging from 100 nm to 700 nm ( 500 nm to m ) ,were fabricated with shallow - doped nv centers .the defects in these nanostructures retained spin coherence times > 700 , enabling an experimental dc magnetic field sensitivity of 9 in fluid .nanobeam waveguides with a triangular cross - section of 300 nm width and 20 m length were fabricated as free - standing structures in bulk diamond with an angled reactive ion etching , and were placed on a cover slip for oil immersion spectroscopy . by adding 50-nm deep notches every 2 m along the beam , guided light is scattered to be collected with high numerical aperture ( na=1.49 ) collection optics , leading to saturation photon count rates of about 0.95 mcts / s .these structures were used to demonstrate efficient spin readout of nv centers based on conversion of the electronic spin state of the nv to a charge - state distribution , followed by single - shot readout of the charge state .an asymmetric waveguide design was demonstrated for even higher collection efficiencies for monolithic bulk structures and close to the surface sensing applications .these pillar - shaped nanowaveguides have a top diameter of 400 nm and a bottom diameter of up to 900 nm .this variation modifies the effective refractive index along the pillar as well as the propagation constant for each mode , enabling saturation photon count rates of up to mcts / s .the temperature dependency of the t1 relaxation time of a single shallow nv electronic spin was determined with this structure .in contrast to collecting the defect emission via coupling to a waveguide mode , solid immersion lenses ( sils ) enable efficient outcoupling from bulk diamond by providing perpendicular angles of incidence at the diamond - air interface . in the simplest implementation , the emission pattern from a color center in diamond is not altered , but a higher fraction of light is emitted into the free - space .although so - called weierstrass , and elliptical designs promise higher collection efficiencies compared to the standard hemispheric shape , in diamond only hemispheric designs have been realized .one can differentiate between microscopic ( a few to a few ten m in size ) and macroscopic sils .microlens arrays were first realized by fabrication of natural diamond by a combination of photoresist reflow and plasma etching with lens diameters ranging from 10 to 100 m .concave and convex microlenses with diameters ranging from 10 to 100 m were fabricated with hot - embossing and photoresist re - flow , followed by icp etching were applied .microscopic lenses offer the advantage of micro - integration and straight - forward fabrication via fib enabling fabrication around a pre - characterized defect centers .however , their extraction efficiencies are more sensitive to surface roughness and non - ideal shapes than macroscopic lenses . 
still, a roughly 10-fold enhancement of the photon detection rate was achieved with 5 m sils .recently , by further optimizing fib fabrication and alignment parameters , position accuracies of better than 100 nm ( lateral ) and 500 nm ( axial ) were demonstrated , leading to saturation count rates of about 1 mcts / s for a single nv center oriented perpendicular to the [ 111 ] cut diamond surface .the deterministic alignment relative to color centers , high collection efficiencies , and relative ease of fabrication have made sils a valuable tool for qi protocols where efficient photon collection is of crucial importance , for example , quantum interference experiments for the heralded entanglement of distant nv qubits and unconditional quantum teleportation between them .sils were also demonstrated for the siv center , enabling higher photon collection efficiencies for fundamental investigations of the electronic structure of the siv and for the demonstration of multiple spectrally identical siv with spectral overlap of up to 91% and nearly transform - limited excitation linewidths .a macroscopic 1 mm in diameter diamond sil with surface flatness better than 10 nm ( rms ) was fabricated with combination of laser and mechanical processing stages , leading to a saturation count rate of 493 kcts / s from a single nv .macroscopic sils from other materials with high refractive index such as gallium phosphite ( gap , ) can also be applied to bulk or thin film diamond if the surfaces are smooth and flat enough to prevent airgaps of more than a few 10 nanometers . for thin film diamond ,this leads to a more efficient collection due to the asymmetric refractive index profile around the color center ( one side gap , on the other side air ) , providing a broadband antenna mechanism for color centers . for a single nv ,saturation count rates of 633 kcts / s were demonstrated . to further increase the collection efficiency and overall single photon count rates from stand - alone photonic devices , a circular ` bullseye ' grating structure fabricated in a diamond membranewas placed directly on a glass coverslip , as indicated in fig .[ fig : bullseye1 ] .the periodic grating structure leads to constructive interference of the membrane - guided emission into the out - of - plane direction .finite difference time domain ( fdtd ) simulations indicate ( fig .[ fig : bullseye1]c ) that up to 70% of the zpl emission of a horizontally oriented dipole emitter is guided into the glass coverslip , aided in part by the higher refractive index contrast of the diamond - air interface .fabrication of these devices is carried out with the methods discussed in section [ sec : diamondpatterning ] .the bullseye gratings were analyzed in a home - built confocal microscope setup ( na , nikon plan fluor ) , and two methods are applied to determine the upper and lower bounds of the saturated single photon detection rates . as upper ( lower )bound , a single photon collection rate of about 4.56 mcps ( 2.70 mcps ) at saturation was determined .the saturation curves are plotted in fig .[ fig : bullseye1]b . moreover , the high quality fabrication preserves the spin properties of the included nv centers , with measured electron spin coherence times of 1.7.1s .optical resonators enable control of the spectral emission properties of optical emitters and enhancing light - matter interaction of single spin systems is enabled by optical resonators . 
applying the concepts of optical resonators to diamond photonics allows the tailoring of the light emission properties of defect centers , enhancing their fluorescence emission rates , and establishing efficient spin - photon interfaces , which are particularly important to correlate single spin states with single quantum states of light . there is also a proposal to improve the efficiency and fidelity of ground - state spin readout of an nv using cavity - enhanced reflection measurements . a large variety of resonator types ranging from micro- to nanoscopic designs have been introduced , such as whispering gallery resonators , microscopic open cavity designs , and photonic crystal cavities . a conceptual overview of different cavity designs can be found in a recent review article . the relevant cavity parameters are the cavity quality factor $Q$ and the cavity mode volume $V$ , which directly influence the dipole - cavity interaction ; for example , the spontaneous emission rate enhancement is proportional to $Q/V$ . because they combine small mode volumes with large quality factors , and therefore a large $Q/V$ ratio , we will focus on photonic crystal ( phc ) nanocavities . an emitter - cavity system can be described by the jaynes - cummings model in the markov approximation , and the purcell factor quantifies the emitter's spontaneous emission ( se ) suppression or enhancement . in the strong purcell regime , in which the emitter is coupled mainly to one optical mode , the se can be significantly enhanced and the overall purcell enhancement exceeds one . when the nv zpl is coupled to a cavity , the se rate is enhanced according to $\Gamma_{\mathrm{zpl}}^{\mathrm{cav}} = F_{\max}\,\xi\,\Gamma_{\mathrm{zpl}}^{0}$ , where $F_{\max}$ is the maximum spectrally - resolved se rate enhancement and $\xi = |\boldsymbol{\mu}\cdot\mathbf{E}(\mathbf{r})|^{2}/(|\boldsymbol{\mu}|^{2}\,|\mathbf{E}_{\max}|^{2})$ quantifies the angular and spatial overlap between the dipole moment ( $\boldsymbol{\mu}$ ) and the cavity mode electric field ( $\mathbf{E}$ ) . in contrast to atoms , quantum dots , and defect centers with narrow emission lines on the order of the cavity linewidth , the nv emission has two major contributions : the narrow zero - phonon line ( zpl ) emission around 637 nm and a broad phonon sideband emission with a full - width at half maximum ( fwhm ) of about 100 nm .
the ratio between the two emission bands is described by the debye - waller factor which is only about 3 % for the nv , hence only a few percent of the overall photoluminescence ( pl ) are emitted into the zpl .therefore , one has to differentiate between the overall purcell enhancement and the spectrally - resolved spontaneous emission ( se ) rate enhancement around the zpl .+ in an early demonstration of a whispering - gallery - mode resonator , diamond microdisks were fabricated into nanocrystalline diamond via fib milling .resonant modes with -factors of about 100 were observed near the nv zpl around 637 nm via detection of photoluminescence and near 1550 nm via evanescent fiber coupling .suspended single crystal diamond microdisks were fabricated by implantation of 180 kev energy boron ions to create subsurface damage and homoepitaxial diamond overgrowth was applied for required microdisk thickness .the ion - damaged layer was selectively removed by electrochemical etching and the disks were patterned via icp - rie .the first resonant enhancement of the nv zpl was also realized with a single crystal diamond resonator that was patterned via ebl and oxygen rie .a 10-fold enhancement was demonstrated , marking an important step of controlling the nv emission via coupling to optical resonators .further work with whispering gallery resonators coupled to waveguides will be discussed in the context of hybrid photonic systems in section [ sec : hybrid ] .the first fabrication and optical characterization of photonic crystal cavities were demonstrated with nanocrystalline diamond , and fundamental cavity modes near the nv zpl with q - factors up to 585 were observed .1d - nanobeam photonic crystal cavities with theoretical q - factors of up to 10 were introduced and fabricated via two different fib milling methods .the first demonstration of coupling a single defect center to a phc cavity was demonstrated by fib milling of single crystal diamond for an l7 cavity design and a siv center with fluorescence intensity enhancement by a factor of 2.8 .the demonstration of the 70-fold enhancement of the zpl transition rate of a cavity - coupled nv marked an important step for cavity qed with defect centers in diamond , realized with a photonic crystal cavity fabricated in monocrystalline diamond using standard semiconductor fabrication techniques .the coupled nv had a single - scan linewidth of a few ghz , determined with photoluminescence excitation measurements . by coupling a single nv to a waveguide based 1d - nanobeam photonic crystal cavity with q - factors up to 6000, enhancement of the nv zpl fluorescence by a factor of was demonstrated .such waveguide based 1d - cavities enable the direct integration into a photonic architecture and are therefore interesting for efficient coupling and transmission experiments .a 1d - nanocavity fabricated by transferred hard mask lithography and oxygen rie ( fig .[ fig : thinfilmphc ] ) enabled the demonstration of q - factors approaching 10,000 , enhancement of the zpl transition rate of , and a beta - factor , indicating operation in the strong purcell regime .furthermore , electron spin manipulation was realized for the first time for cavity - coupled nv with coherence times exceeding 200 with on - chip microwave striplines for efficient spin control , providing a long - lived quantum system .this spin - photon interface experimentally validates the promise of long spin coherence nv-cavity systems for scalable quantum repeaters and quantum networks . 
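a minimal numerical sketch of the enhancement formula introduced above , combined with the debye - waller factor just discussed , is given below . the quality factor , mode volume , overlap factor , excited - state lifetime and debye - waller value are illustrative assumptions of the same order as the demonstrations discussed in this section , not the parameters of any specific device ; the prefactor is the standard purcell expression for a resonant dipole .

```python
import math

# illustrative parameters (assumptions, order-of-magnitude only)
wavelength_nm = 637.0    # nv zero-phonon line
Q = 6000.0               # cavity quality factor
V_mode = 1.0             # mode volume in units of (lambda/n)**3
xi = 0.4                 # angular/spatial overlap factor (0..1)
debye_waller = 0.03      # fraction of nv emission in the zpl
tau_ns = 12.0            # excited-state lifetime (approx.)

# standard purcell factor for a resonant, ideally placed and oriented dipole
F_max = (3.0 / (4.0 * math.pi**2)) * Q / V_mode
# on-resonance zpl rate enhancement for an imperfectly placed/oriented dipole
F_zpl = F_max * xi

# overall lifetime change: only the zpl branch is enhanced,
# the phonon sideband is (to first order) left unchanged
gamma_0 = 1.0 / tau_ns
gamma_cav = gamma_0 * (debye_waller * F_zpl + (1.0 - debye_waller))
zpl_fraction = debye_waller * F_zpl / (debye_waller * F_zpl + 1.0 - debye_waller)
linewidth_mhz = 1e3 / (2.0 * math.pi * tau_ns)   # lifetime-limited linewidth, free space

print(f"F_max = {F_max:.0f}, zpl enhancement = {F_zpl:.0f}")
print(f"fraction of emission into the zpl with cavity: {zpl_fraction:.0%}")
print(f"lifetime shortened from {tau_ns:.1f} ns to {1.0 / gamma_cav:.1f} ns")
print(f"lifetime-limited linewidth (free space): {linewidth_mhz:.0f} MHz")
```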
due to the experimental difficulties of creating large scale , high - quality , membranes of uniform and controllable thickness ,many groups have begun to explore the fabrication of photonic crystal cavities in bulk diamond .first implementations focused on creating membranes through ion damage of the diamond layer , and subsequent etching using fib milling , as introduced in sec . [ sec : sec : thinfilm ] .this enabled photonic crystals etched from bulk diamond with q near the nv zpl .however , as discussed previously the lattice damage in this method is currently high and will most likely hinder the spin and spectral properties of defect centers .three step tilted fib milling , in which the stage is tilted with respect to the ion milling beam in two directions to achieve an undercut , and a last un - tilted mill step is used to etch the photonic crystal holes enabled nanobeam cavities which were separated from the bulk .this technology enabled cavities with q s of a few hundred , matching theoretical predictions across multiple modes .simulations also showed that the triangular geometry that results from the tilted fib etching can support high qs ( ) .while fib processing of diamond has enabled cavities in membrane and bulk diamond , it is an inherently low output process , as it is a serial etching process .the development of angled rie etching with an angled cage allowed the extension of rie etching techniques to triangular nanobeam cavities in bulk diamond . with this technique, cavities were realized with resonances near the nv zpl of a few thousand .high q cavities ( loaded q ) have also been demonstrated in the infrared .in contrast to diamond stand - alone devices , we will discuss diamond photonic systems comprising more than one optical element .this can , for example , be a ring - resonator coupled to waveguide .we make the distinction from stand - alone devices , as the extension of photonic systems can lead to complex architectures that will enable on - chip functionalities such as generation , entanglement , routing , and gating .two approaches have been demonstrated for the fabrication of all - diamond integrated photonic architectures . the first one is based on photonic elements etched into diamond - on - insulator or free - standing diamond thin - films .the second approach is based on fabricating monolithic , suspended 3d structures into bulk diamond samples . for more details on the patterning methods, please refer to sec .[ sec : diamondpatterning ] .silicon - on - insulator fabrication technology has enabled many high quality photonic structures by providing high index contrast and a stable platform .diamond films suspended over air or placed on sio substrates provides similar index contrast and stability , and photonic device designs have been shown to translate with ease .the main disadvantage is the requirement of large , single - crystal diamond films with uniform thickness , which are difficult to fabricate as discussed in sec .[ sec : sec : thinfilm ] . despite this challenge, first proof - of - principle experiments have been demonstrated .haussmann et al .realized a nanophotonic network in a single crystal diamond film by integrating a high - q ring resonator ( ) with an optical waveguide containing grating in- and outcouplers .a single nv center inside the ring - resonator was coupled to its mode and single photon generation and routing was demonstrated with an overall photon extraction efficiency of about 10% . in a similar system ,faraon et al . 
showed strong enhancement of the zero - phonon line of nv centers coupled to the ring resonance . by replacing grating couplers with polymer spot - size converters at the end of the diamond waveguides , off - chip fiber coupling as low as 1db / facet were demonstrated for wavelengths around 1550 nm .the integrated race - track resonators had quality factors up to and signatures of nonlinear effects were observed . by coupling a second waveguide to a ring - resonator and by locally tuning the temperature of the diamond waveguides an optical - thermal switch was realized with switching efficiencies of 31% at the drop and 73% at the through port .initial attempts to fabricate photonic components in bulk diamond used a combination of patterning techniques ( ion - induced damage for structure undercut , rie for large pattern transfer , and fib for local pattern transfer ) .two mode ridge waveguides in type-1b single crystal diamond were produced , though the damage caused by the ion implantation suggests that this technique can not be adopted for quantum technologies .on the other hand , triangular rie etching ( as introduced in sec .[ sec : angetching ] ) is well suited to pattern photonic systems into bulk diamond directly .free - standing components of a photonic integrated circuit , including optical waveguides and photonic crystal and microdisk cavities have been fabricated in a proof - of - principle demonstration .it was later shown in simulation that ` s - bend ' structures can be used in conjunction with the triangular rie etching to add low - loss connection points to the bulk , and thus extend the length of the waveguides .more work will have to be done in this area to increase the structural stability , operation at visible wavelengths , and fabrication yield of these bulk photonic integrated circuits .hybrid photonic systems combine and exploit the advantages of multiple systems to achieve more functionality than any single isolated system .random assembly , self assembly , bottom up fabrication , and manual assembly have all been used to access plasmonic and photonic regimes that would otherwise be inaccessible . here , we will review the integration of diamond with other semiconductor material systems to gain access to high quality cavity and photonic integrated circuit systems that are currently difficult to achieve with high - yield in diamond .integration into cavities has allowed for emission enhancement of defects in un - patterned diamond slabs and nanodiamonds , and integration into waveguides has allowed for high collection efficiency and on - chip routing of emitted light . due toits large bandgap , low intrinsic fluorescence , and ease of fabrication , gallium phosphide ( gap ) has been used to enhance single defect emission in diamond .the float down of pre - patterned gap microdisks onto an unpatterned bulk diamond allowed gap to be used as both an etch mask and a higher index guiding material .these high quality hybrid cavities supported whispering gallery mode resonances with and loaded q factor of 3,800 . the structure and the mode profile can be seen in fig .[ fig : gap ] .however , the placement of nv with respect to the optical mode is random , and single nv enhancement was not shown . 
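the quoted quality factors can be translated into spectral quantities with two elementary relations : the resonance linewidth is lambda / q , and the free spectral range of a whispering - gallery resonator scales inversely with its optical circumference . the disk diameter and group index below are illustrative assumptions , not values from the cited work .

```python
import math

wavelength_nm = 637.0   # near the nv zpl
Q_loaded = 3800.0       # loaded quality factor quoted for the hybrid gap microdisk
d_um = 10.0             # assumed disk diameter (illustrative)
n_group = 3.3           # assumed group index of the hybrid mode (illustrative)

linewidth_nm = wavelength_nm / Q_loaded
fsr_nm = wavelength_nm**2 / (math.pi * d_um * 1e3 * n_group)

print(f"resonance linewidth ~ {linewidth_nm:.3f} nm")
print(f"free spectral range ~ {fsr_nm:.2f} nm "
      f"(~{fsr_nm / linewidth_nm:.0f} linewidths between modes)")
```

for the float - down gap resonators discussed above , this mode spacing sets how far a resonance must be tuned to bring it onto a given nv line .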
the same float down method was used to pattern ring resonators ( ) on a diamond sample with a lower density of nv centers .tuning the cavity resonance at 10k , and measuring the cavity - coupled emission through a tapered fiber showed enhancement of multiple single nv zpls .however , the purcell effect was limited to , again due to placement with respect to the mode maximum , as well as the large volume of the resonator .silica whispering gallery mode resonators ( wgms ) have also been coupled to diamond structures with nv centers in order to exploit the ultra - high quality factors possible with wgms , although the mode volumes are high . a deterministic coupling approach in which silica microspheres are brought into contact with integrated 200 nm diamond nanopillars with nanometer precisionallows the preservation of nv bulk properties while maintaining high quality factors ( ) for the composite system .high quality factor resonators can also be achieved with distributed bragg reflector ( dbr ) cavities .micro fabricated mirrors can facilitate high finesse , small mode volume cavities which are tunable post - fabrication . integrating these dbr mirrors into a fiber - based systemmaintains the cavity properties , while achieving high coupling into a useable single fiber mode .the coupling of a single nv center in a nanodiamond to the field maximum of a tunable high finesse dbr cavity ( at nm ) via lateral positing allowed study of the phonon - assisted transitions of the nv . a similar setup with a fiber - based dbr cavity elucidated the full scaling laws of purcell enhancement for the nv emission spectrum .both setups are projected to reach the strong purcell regime at cold temperatures when the coupling between the nv zpl and the cavity field is larger .a fiber - based dbr cavity in which both input and output are coupled directly to fiber modes has increased the spectral photon rate density by orders of magnitude , an important step for quantum information processing .these fiber - based dbr cavities have also been engineered to maintain high finesse and quality factors ( ) even while containing thick diamond membranes ( m ) which can contain spectrally stable nv centers with long coherence times , unlike nanodiamonds .+ hybrid systems have also been used to enhance the coupling of the nv emission into traveling - wave modes for enhanced detection rates , and collection into photonic modes that can be then manipulated and interfered to create larger networks .early work has concentrated on the hybrid diamond - gap systems discussed in section [ sec : hybcav ] , demonstrating the evanescent coupling of nv center emission to gap multimode waveguides which suffered from high loss and fluorescence .theoretical work demonstrated the possibility of single - mode operation with better coupling between the nv emission and the gap waveguide mode , along with a scheme for coupling nv centers more than 50 nm from the center to high q nanobeam cavities in the gap layer .recent work on gap - diamond hybrid systems has shown the waveguide - coupling of single nv zero phonon line emission into disk resonators with estimated high zero - phonon line emission rates into one direction of the waveguide .this approach takes advantage of well - established thin film growth of and patterning methods of gap , and enables the use of mainly unpatterned diamond to preserve the defect properties .however , this approach is limited by the reduced coupling of the defects to the waveguide mode .while this is mediated 
by the addition of a cavity as theoretically and experimentally demonstrated , there exists no proposed way to locate the defect in the cavity mode maximum .one approach to overcome this limitation is to fabricate diamond single - mode waveguides with nv directly at the mode maximum .such a waveguide can then be coupled with almost unity coupling efficiency to a pre - fabricated sin photonic waveguide architecture with a suspended coupling region as shown in fig .[ fig : sin ] . for a diamond waveguide with a 200 x 200nm cross section, it was determined that a dipole oriented perpendicularly to the propagation direction will couple of the emission to the single optical waveguide mode . moreover , with the optimized tapering of both the diamond waveguide and the waveguide in the underlying photonic circuitry , up to of the light in the diamond waveguide will be coupled to the single mode sin waveguide . in experiment, the hybrid structure shows that 3.5 times more photons emitted by the nv are collected into one direction of the waveguide than into a free space 0.95na objective , even with non - optimized diamond tapering regions .a similar hybrid approach can also be used to efficiently couple light from single emitters in diamond to single - mode silica fibers . as in the photonic integrated circuit , tapers enable an adiabatic mode transfer between the single mode diamond waveguide and guided mode in the tapered silica fiber , theoretically enabling unity power transfer from diamond to fiber waveguide .experimentally , an overall collection efficiency between 16% and 37% into a single optical mode was demonstrated , with a single photon count rate of more than 700kcps in saturation .another important advantage of a hybrid bottom - up approach is that it enables the building of large - scale systems with almost unity probability , overcoming the stochastic defect center creation process with inherently low yield of high - quality quantum nodes .pre - selection of the best diamond waveguides from an array of fabricated waveguides guarantees that every node in the final integrated network will contain a single defect with the desired spectral and spin properties , as well as being well coupled to the optical mode .this enables a linear scaling in the number of fabrication attempts necessary to create a quantum network with the desired number of nodes . to increase the detection efficiency of single photon emitted by defects in diamond , efforts are underway to fabricate superconducting nanowire single photon detectors ( snspds ) directly on diamond or to integrated them with hybrid waveguide architectures .niobium titanium nitritide ( nbtn ) snspds have been fabricated on single - crystal diamond substrates and have been shown to have good detection properties .niobium nitride ( nbn ) snspds fabricated directly on waveguides in polycrystalline thin - film diamond grown on oxide show high detection efficiencies up to 66% at 1550 nm combined with low dark count rates , and timing resolution of 190ps .as discussed in this review , recent advances in diamond synthesis and fabrication have enabled high - quality nano- and microphotonic devices for increased photon collection and tailored light - matter interaction . 
despite this progress , there are still fundamental challenges to be overcome on the way towards more complex qi implementations based on these diamond photonic nanostructures .high fidelity qi applications require long spin coherence times and lifetime limited emission linewidths .as noted in the review , many diamond growth and fabrication schemes have been tailored to bolster spin coherence times .however , more development must be done to produce nanostructures with both high intrinsic quality ( high q values and low mode volumes ) and high defect quality ( long spin coherence times and lifetime limited emission linewidths ) .in particular the latter has up to date been limited to a factor of about 30 of the fluorescence lifetime - limit .it is commonly believed that both crystal lattice defects induced by dry etching and increased surface area near defects due to nanofabrication lead to the degradation of defect properties .therefore , suitable nanofabrication technology and proper surface termination methods must be developed .furthermore , the realization of large - scale quantum photonic systems will depend on _ scalable fabrication _ techniques and on _ tunability _ stark shift control to bring multiple defect transitions ( e.g. the nv zpl ) to the same frequency , and cavity tuning to match the resonance wavelength with the defect transition frequency without degrading the cavity q. finally , for high fidelity information processing , error correction is required . to establish error - corrected logical qubit nodes in a large quantum photonic processor , advances must be made in the high yield creation of coupled defect centers as well as control over the surrounding nuclear environment as resource to potentially store quantum states longer than an individual quantum memory is able to .the progress presented in this review proves the viability of diamond based nanophotonic systems in qi and sensing , and further progress will enable enhanced sensing as well as scalable solid state quantum networks .fabrication and experiments were supported in part by the air force office of scientific research pecase ( afosr grant no . fa9550 - 11 - 1 - 0014 ) , the afosr quantum memories muri , and the u.s .army research laboratory ( arl ) center for distributed quantum information ( cdqi ) .research was carried out in part at the center for functional nanomaterials , brookhaven national laboratory , which is supported by the u.s .department of energy , office of basic energy sciences , under contract no .de - sc0012704 .t.s . was supported in part by the cdqi .e.h.c . was supported by the nasa office of the chief technologist s space technology research fellowship .m.e.t . was supported in part by the afosr quantum memories muri .m.e.t . was supported by the nsf igert program interdisciplinary quantum information science and engineering ( iquise ) .is supported by the afosr pecase .k. nemoto , m. trupke , s. j. devitt , a. m. stephens , k. buczak , t. nobauer , m. s. everitt , j. schmiedmayer , and w. j. munro , `` photonic architecture for scalable quantum information processing in nv - diamond , '' arxiv:1309.4277 [ quant - ph ] ( 2013 ) . p. c. maurer , g. kucsko , c. latta , l. jiang , n. y. yao , s. d. bennett , f. pastawski , d. hunger , n. chisholm , m. markham , d. j. twitchen , j. i. cirac , and m. d. lukin , `` room - temperature quantum bit memory exceeding one second , '' science * 336 * , 12831286 ( 2012 ) .j. r. maze , p. l. stanwix , j. s. hodges , s. hong , j. m. taylor , p. 
cappellaro , l. jiang , m. v. g. dutt , e. togan , a. s. zibrov , a. yacoby , r. l. walsworth , and m. d. lukin , `` nanoscale magnetic sensing with an individual electronic spin in diamond , '' nature * 455 * , 644647 ( 2008 ) .f. dolde , h. fedder , m. w. doherty , t. nbauer , f. rempp , g. balasubramanian , t. wolf , f. reinhard , l. c. l. hollenberg , f. jelezko , and j. wrachtrup , `` electric - field sensing using single diamond spins , '' nature physics * 7 * , 459463 ( 2011 ) .p. neumann , i. jakobi , f. dolde , c. burk , r. reuter , g. waldherr , j. honert , t. wolf , a. brunner , j. h. shim , d. suter , h. sumiya , j. isoya , and j. wrachtrup , `` high - precision nanoscale temperature sensing using single defects in diamond , '' nano letters * 13 * , 27382742 ( 2013 ) .h. clevenson , m. e. trusheim , c. teale , t. schrder , d. braje , and d. englund , `` broadband magnetometry and temperature sensing with a light - trapping diamond waveguide , '' nature physics * 11 * , 393397 ( 2015 ) .d. le sage , l. m. pham , n. bar - gill , c. belthangady , m. d. lukin , a. yacoby , and r. l. walsworth , `` efficient photon detection from color centers in a diamond optical waveguide , '' physical review b * 85 * , 121202 ( 2012 ) .j. m. taylor , p. cappellaro , l. childress , l. jiang , d. budker , p. r. hemmer , a. yacoby , r. walsworth , and m. d. lukin , `` high - sensitivity diamond magnetometer with nanoscale resolution , '' nature physics * 4 * , 810816 ( 2008 ) .i. lovchinsky , a. o. sushkov , e. urbach , n. p.d. leon , s. choi , k. d. greve , r. evans , r. gertner , e. bersin , c. mller , l. mcguinness , f. jelezko , r. l. walsworth , h. park , and m. d. lukin , `` nuclear magnetic resonance detection and spectroscopy of single proteins using quantum logic , '' science * 351 * , 836841 ( 2016 ) .j. p. hadden , j. p. harrison , a. c. stanley - clarke , l. marseglia , y. l. d. ho , b. r. patton , j. l. obrien , and j. g. rarity , `` strongly enhanced photon collection from diamond defect centers under microfabricated integrated solid immersion lenses , '' applied physics letters * 97 * , 2419012419013 ( 2010 ) .a. faraon , p. e. barclay , c. santori , k .-m . c. fu , and r. g. beausoleil , `` resonant enhancement of the zero - phonon emission from a colour centre in a diamond cavity , '' nature photonics * 5 * , 301305 ( 2011 ) .j. riedrich - mller , l. kipfstuhl , c. hepp , e. neu , c. pauly , f. mcklich , a. baur , m. wandt , s. wolff , m. fischer , s. gsell , m. schreck , and c. becher , `` one- and two - dimensional photonic crystal microcavities in single crystal diamond , '' nature nanotechnology * 7 * , 6974 ( 2012 ) .l. li , t. schrder , e. h. chen , m. walsh , i. bayn , j. goldstein , o. gaathon , m. e. trusheim , m. lu , j. mower , m. cotlet , m. l. markham , d. j. twitchen , and d. englund , `` coherent spin control of a nanocavity - enhanced qubit in diamond , '' nature communications * 6 * , 6173 ( 2015 ) .b. j. m. hausmann , m. khan , y. zhang , t. m. babinec , k. martinick , m. mccutcheon , p. r. hemmer , and m. lonar , `` fabrication of diamond nanowires for quantum information processing applications , '' diamond and related materials * 19 * , 621629 ( 2010 ) .l. li , i. bayn , m. lu , c .- y .nam , t. schrder , a. stein , n. c. harris , and d. englund , `` nanofabrication on unconventional substrates using transferred hard masks , '' scientific reports * 5 * , 7802 ( 2015 ) .s. schietinger , m. barth , t. aichele , and o. 
benson , `` plasmon - enhanced single photon emission from a nanoassembled metal - diamond hybrid structure at room temperature , '' nano letters * 9 * , 16941698 ( 2009 ) .m. barth , s. schietinger , t. schrder , t. aichele , and o. benson , `` controlled coupling of nv defect centers to plasmonic and photonic nanostructures , '' journal of luminescence * 130 * , 16281634 ( 2010 ) .j. t. choy , i. bulu , b. j. m. hausmann , e. janitz , i .-c . huang , and m. lonar , `` spontaneous emission and collection efficiency enhancement of single emitters in diamond via plasmonic cavities and gratings , '' applied physics letters * 103 * , 161101 ( 2013 ) .m. e. trusheim , l. li , a. laraoui , e. h. chen , h. bakhru , t. schrder , o. gaathon , c. a. meriles , and d. englund , `` scalable fabrication of high purity diamond nanocrystals with long - spin - coherence nitrogen vacancy centers , '' nano letters * 14 * , 3236 ( 2014 ) .r. s. balmer , j. r. brandon , s. l. clewes , h. k. dhillon , j. m. dodson , i. friel , p. n. inglis , t. d. madgwick , m. l. markham , t. p. mollart , n. perkins , g. a. scarsbrook , d. j. twitchen , a. j. whitehead , j. j. wilman , and s. m. woollard , `` chemical vapour deposition synthetic diamond : materials , technology and applications , '' journal of physics : condensed matter * 21 * , 364221 ( 2009 ) .g. balasubramanian , p. neumann , d. twitchen , m. markham , r. kolesov , n. mizuochi , j. isoya , j. achard , j. beck , j. tissler , v. jacques , p. r. hemmer , f. jelezko , and j. wrachtrup , `` ultralong spin coherence time in isotopically engineered diamond , '' nature materials * 8 * , 383 ( 2009 ) .t. teraji , t. taniguchi , s. koizumi , k. watanabe , m. liao , y. koide , and j. isoya , `` chemical vapor deposition of 12 c isotopically enriched polycrystalline diamond , '' japanese journal of applied physics * 51 * , 090104 ( 2012 ) .a. gruber , a. drbenstedt , c. tietz , l. fleury , j. wrachtrup , and c. v. borczyskowski , `` scanning confocal optical microscopy and magnetic resonance on single defect centers , '' science * 276 * , 20122014 ( 1997 ) .e. togan , y. chu , a. s. trifonov , l. jiang , j. maze , l. childress , m. v. g. dutt , a. s. srensen , p. r. hemmer , a. s. zibrov , and m. d. lukin , `` quantum entanglement between an optical photon and a solid - state spin qubit , '' nature * 466 * , 730734 ( 2010 ) .h. bernien , b. hensen , w. pfaff , g. koolstra , m. s. blok , l. robledo , t. h. taminiau , m. markham , d. j. twitchen , l. childress , and r. hanson , `` heralded entanglement between solid - state qubits separated by three metres , '' nature * 497 * , 8690 ( 2013 ) . w. pfaff , b. j. hensen , h. bernien , s. b. v. dam , m. s. blok , t. h. taminiau , m. j. tiggelman , r. n. schouten , m. markham , d. j. twitchen , and r. hanson , `` unconditional quantum teleportation between distant solid - state quantum bits , '' science * 345 * , 532535 ( 2014 ). b. hensen , h. bernien , a. e. drau , a. reiserer , n. kalb , m. s. blok , j. ruitenberg , r. f. l. vermeulen , r. n. schouten , c. abelln , w. amaya , v. pruneri , m. w. mitchell , m. markham , d. j. twitchen , d. elkouss , s. wehner , t. h. taminiau , and r. hanson , `` loophole - free bell inequality violation using electron spins separated by 1.3 kilometres , '' nature * 526 * , 682686 ( 2015 ) .r. yang , c. a. zorman , and p. x. l. 
feng , `` high frequency torsional - mode nanomechanical resonators enabled by very thin nanocrystalline diamond diaphragms , '' diamond and related materials * 54 * , 1925 ( 2015 ) .a. sipahigil , k. jahnke , l. rogers , t. teraji , j. isoya , a. zibrov , f. jelezko , and m. lukin , `` indistinguishable photons from separated silicon - vacancy centers in diamond , '' physical review letters * 113 * , 113602 ( 2014 ) .s. furuyama , k. tahara , t. iwasaki , m. shimizu , j. yaita , m. kondo , t. kodera , and m. hatano , `` improvement of fluorescence intensity of nitrogen vacancy centers in self - formed diamond microstructures , '' applied physics letters * 107 * , 163102 ( 2015 ) .d. sovyk , v. ralchenko , m. komlenok , a. a. khomich , v. shershulin , v. vorobyov , i. vlasov , v. konov , and a. akimov , `` fabrication of diamond microstub photoemitters with strong photoluminescence of siv color centers : bottom - up approach , '' applied physics a * 118 * , 1721 ( 2014 ) .i. bayn , b. meyler , a. lahav , j. salzman , r. kalish , b. a. fairchild , s. prawer , m. barth , o. benson , t. wolf , p. siyushev , f. jelezko , and j. wrachtrup , `` processing of photonic crystal nanocavity for quantum information in diamond , '' diamond and related materials * 20 * , 937943 ( 2011 ) .m. lesik , p. spinicelli , s. pezzagna , p. happel , v. jacques , o. salord , b. rasser , a. delobbe , p. sudraud , a. tallaire , j. meijer , and j .- f .roch , `` maskless and targeted creation of arrays of colour centres in diamond using focused ion beam technology , '' physica status solidi ( a ) * 210 * , 20552059 ( 2013 ) .m. j. burek , n. p.de leon , b. j. shields , b. j. m. hausmann , y. chu , q. quan , a. s. zibrov , h. park , m. d. lukin , and m. lonar , `` free - standing mechanical and photonic nanostructures in single - crystal diamond , '' nano letters * 12 * , 60846089 ( 2012 ) .m. j. burek , y. chu , m. s. z. liddy , p. patel , j. rochman , s. meesala , w. hong , q. quan , m. d. lukin , and m. lonar , `` high quality - factor optical nanocavities in bulk single - crystal diamond , '' nature communications * 5 * , 5718 ( 2014 ) .t. a. kennedy , j. s. colton , j. e. butler , r. c. linares , and p. j. doering , `` long coherence times at 300 k for nitrogen - vacancy center spins in diamond grown by chemical vapor deposition , '' applied physics letters * 83 * , 41904192 ( 2003 ) .k. ohno , f. j. heremans , l. c. bassett , b. a. myers , d. m. toyli , a. c. b. jayich , c. j. palmstrm , and d. d. awschalom , `` engineering shallow spins in diamond with nitrogen delta - doping , '' applied physics letters * 101 * , 082413 ( 2012 ) .l. j. rogers , k. d. jahnke , t. teraji , l. marseglia , c. mller , b. naydenov , h. schauffert , c. kranz , j. isoya , l. p. mcguinness , and f. jelezko , `` multiple intrinsically identical single - photon emitters in the solid state , '' nature communications * 5 * , 4739 ( 2014 ) .j. c. lee , d. o. bracher , s. cui , k. ohno , c. a. mclellan , x. zhang , p. andrich , b. alemn , k. j. russell , a. p. magyar , i. aharonovich , a. b. jayich , d. awschalom , and e. l. hu , `` deterministic coupling of delta - doped nitrogen vacancy centers to a nanobeam photonic crystal cavity , '' applied physics letters * 105 * , 261101 ( 2014 ) .t. teraji , t. yamamoto , k. watanabe , y. koide , j. isoya , s. onoda , t. ohshima , l. j. rogers , f. jelezko , p. neumann , j. wrachtrup , and s. 
koizumi , `` homoepitaxial diamond film growth : high purity , high crystalline quality , isotopic enrichment , and single color center formation , '' physica status solidi ( a ) * 212 * , 23652384 ( 2015 ) . s. gsell , t. bauer , j. goldfu , m. schreck , and b. stritzker , `` a route to diamond wafers by epitaxial deposition on silicon via iridium / yttria - stabilized zirconia buffer layers , '' applied physics letters * 84 * , 45414543 ( 2004 ) .m. schreck , f. hrmann , h. roll , j. k. n. lindner , and b. stritzker , `` diamond nucleation on iridium buffer layers and subsequent textured growth : a route for the realization of single - crystal diamond films , '' applied physics letters * 78 * , 192194 ( 2001 ) .m. schreck , a. schury , f. hrmann , h. roll , and b. stritzker , `` mosaicity reduction during growth of heteroepitaxial diamond films on iridium buffer layers : experimental results and numerical simulations , '' journal of applied physics * 91 * , 676685 ( 2002 ) .p. olivero , s. rubanov , p. reichart , b. c. gibson , s. t. huntington , j. r. rabeau , a. d. greentree , j. salzman , d. moore , d. n. jamieson , and s. prawer , `` characterization of three - dimensional microstructures in single - crystal diamond , '' diamond and related materials * 15 * , 16141621 ( 2006 ) .l. li , m. trusheim , o. gaathon , k. kisslinger , c .- j .cheng , m. lu , d. su , x. yao , h .- c .huang , i. bayn , a. wolcott , r. m. o. jr , and d. englund , `` reactive ion etching : optimized diamond membrane fabrication for transmission electron microscopy , '' journal of vacuum science & technology b * 31 * , 06ff01 ( 2013 ) .j. s. hodges , l. li , m. lu , e. h. chen , m. e. trusheim , s. allegri , x. yao , o. gaathon , h. bakhru , and d. englund , `` long - lived nv - spin coherence in high - purity diamond membranes , '' new journal of physics * 14 * , 093004 ( 2012 ) .c. f. wang , r. hanson , d. d. awschalom , e. l. hu , t. feygelson , j. yang , and j. e. butler , `` fabrication and characterization of two - dimensional photonic crystal microcavities in nanocrystalline diamond , '' applied physics letters * 91 * , 201112 ( 2007 ) .b. a. fairchild , p. olivero , s. rubanov , a. d. greentree , f. waldermann , r. a. taylor , i. walmsley , j. m. smith , s. huntington , b. c. gibson , d. n. jamieson , and s. prawer , `` fabrication of ultrathin single - crystal diamond membranes , '' advanced materials * 20 * , 47934798 ( 2008 ) .a. p. magyar , j. c. lee , a. m. limarga , i. aharonovich , f. rol , d. r. clarke , m. huang , and e. l. hu , `` fabrication of thin , luminescent , single - crystal diamond membranes , '' applied physics letters * 99 * , 081913 ( 2011 ) .i. aharonovich , j. c. lee , a. p. magyar , b. b. buckley , c. g. yale , d. d. awschalom , and e. l. hu , `` homoepitaxial growth of single crystal diamond membranes for quantum information processing , '' advanced materials * 24 * , op54op59 ( 2012 ) . o. gaathon , j. s. hodges , e. h. chen , l. li , s. bakhru , h. bakhru , d. englund , and r. m. osgood jr . , `` planar fabrication of arrays of ion - exfoliated single - crystal - diamond membranes with nitrogen - vacancy color centers , '' optical materials * 35 * , 361365 ( 2013 ) .j. t. choy , b. j. m. hausmann , t. m. babinec , i. bulu , m. khan , p. maletinsky , a. yacoby , and m. lonar , `` enhanced single - photon emission from a diamond - silver aperture , '' nature photonics * 5 * , 738743 ( 2011 ) .j. k. w. yang , v. anant , and k. k. 
berggren , `` enhancing etch resistance of hydrogen silsesquioxane via postdevelop electron curing , '' journal of vacuum science & technology b : microelectronics and nanometer structures * 24 * , 3157 ( 2006 ) .i. bayn , s. mouradian , l. li , j. a. goldstein , t. schrder , j. zheng , e. h. chen , o. gaathon , m. lu , a. stein , c. a. ruggiero , j. salzman , r. kalish , and d. englund , `` fabrication of triangular nanobeam waveguide networks in bulk diamond using single - crystal silicon hard masks , '' applied physics letters * 105 * , 211101 ( 2014 ) .r. u. a. khan , b. l. cann , p. m. martineau , j. samartseva , j. j. p. freeth , s. j. sibley , c. b. hartland , m. e. newton , h. k. dhillon , and d. j. twitchen , `` colour - causing defects and their related optoelectronic transitions in single crystal cvd diamond , '' journal of physics : condensed matter * 25 * , 275801 ( 2013 ) .j. martin , r. wannemacher , j. teichert , l. bischoff , and b. khler , `` generation and detection of fluorescent color centers in diamond with submicron resolution , '' applied physics letters * 75 * , 30963098 ( 1999 ) .c. yang , d. n. jamieson , c. pakes , s. prawer , a. dzurak , f. stanley , p. spizziri , l. macks , e. gauja , and r. g. clark , `` single phosphorus ion implantation into prefabricated nanometre cells of silicon devices for quantum bit fabrication , '' japanese journal of applied physics * 42 * , 41244128 ( 2003 ) .c. a. mclellan , b. a. myers , s. kraemer , k. ohno , d. d. awschalom , and a. c. b. jayich , `` deterministic formation of highly coherent nitrogen - vacancy centers using a focused electron irradiation technique , '' arxiv:1512.08821 [ cond - mat ] ( 2015 ) .i. bayn , e. h. chen , m. e. trusheim , l. li , t. schrder , o. gaathon , m. lu , a. stein , m. liu , k. kisslinger , h. clevenson , and d. englund , `` generation of ensembles of individually resolvable nitrogen vacancies using nanometer - scale apertures in ultrahigh - aspect ratio planar implantation masks , '' nano letters * 15 * , 17511758 ( 2015 ) .v. m. acosta , e. bauch , m. p. ledbetter , c. santori , k .-m . c. fu , p. e. barclay , r. g. beausoleil , h. linget , j. f. roch , f. treussart , s. chemerisov , w. gawlik , and d. budker , `` diamonds with a high density of nitrogen - vacancy centers for magnetometry applications , '' physical review b * 80 * , 115202 ( 2009 ) .j. o. orwa , c. santori , k. m. c. fu , b. gibson , d. simpson , i. aharonovich , a. stacey , a. cimmino , p. balog , m. markham , d. twitchen , a. d. greentree , r. g. beausoleil , and s. prawer , `` engineering of nitrogen - vacancy color centers in high purity diamond by ion implantation and annealing , '' journal of applied physics * 109 * , 083530 ( 2011 ) .g. davies and m. f. hamer , `` optical studies of the 1.945 ev vibronic band in diamond , '' proceedings of the royal society of london a : mathematical , physical and engineering sciences * 348 * , 285298 ( 1976 ) .p. dek , b. aradi , m. kaviani , t. frauenheim , and a. gali , `` formation of nv centers in diamond : a theoretical study based on calculated transitions and migration of nitrogen and vacancy related defects , '' physical review b * 89 * , 075203 ( 2014 ) .j. schwartz , s. aloni , d. f. ogletree , and t. schenkel , `` effects of low - energy electron irradiation on formation of nitrogen vacancy centers in single - crystal diamond , '' new journal of physics * 14 * , 043024 ( 2012 ) .j. schwartz , s. aloni , d. f. ogletree , m. tomut , m. bender , d. severin , c. 
trautmann , i. w. rangelow , and t. schenkel , `` local formation of nitrogen - vacancy centers in diamond by swift heavy ions , '' journal of applied physics * 116 * , 214107 ( 2014 ) . c. santori , p. e. barclay , k .-m . c. fu , and r. g. beausoleil , `` vertical distribution of nitrogen - vacancy centers in diamond formed by ion implantation and annealing , '' physical review b * 79 * , 125313 ( 2009 ) .k. ohno , f. j. heremans , c. f. d. l. casas , b. a. myers , b. j. alemn , a. c. b. jayich , and d. d. awschalom , `` three - dimensional localization of spins in diamond using 12c implantation , '' applied physics letters * 105 * , 052406 ( 2014 ) .b. naydenov , f. reinhard , a. lmmle , v. richter , r. kalish , u. f. s. dhaenens - johansson , m. newton , f. jelezko , and j. wrachtrup , `` increasing the coherence time of single electron spins in diamond by high temperature annealing , '' applied physics letters * 97 * , 242511 ( 2010 ) .r. e. evans , a. sipahigil , d. d. sukachev , a. s. zibrov , and m. d. lukin , `` coherent optical emitters in diamond nanostructures via ion implantation , '' arxiv:1512.03820 [ cond - mat , physics : quant - ph ] ( 2015 ) .y. chu , n. de leon , b. shields , b. hausmann , r. evans , e. togan , m. j. burek , m. markham , a. stacey , a. zibrov , a. yacoby , d. twitchen , m. lonar , h. park , p. maletinsky , and m. lukin , `` coherent optical transitions in implanted nitrogen vacancy centers , '' nano letters * 14 * , 19821986 ( 2014 ) .p. tamarat , t. gaebel , j. r. rabeau , m. khan , a. d. greentree , h. wilson , l. c. l. hollenberg , s. prawer , p. hemmer , f. jelezko , and j. wrachtrup , `` stark shift control of single optical centers in diamond , '' physical review letters * 97 * , 083002 ( 2006 ) .f. c. waldermann , p. olivero , j. nunn , k. surmacz , z. y. wang , d. jaksch , r. a. taylor , i. a. walmsley , m. draganski , p. reichart , a. d. greentree , d. n. jamieson , and s. prawer , `` creating diamond color centers for quantum optical applications , '' diamond and related materials * 16 * , 18871895 ( 2007 ) .z. huang , w .- d .li , c. santori , v. m. acosta , a. faraon , t. ishikawa , w. wu , d. winston , r. s. williams , and r. g. beausoleil , `` diamond nitrogen - vacancy centers created by scanning focused helium ion beam and annealing , '' applied physics letters * 103 * , 081906 ( 2013 ) .d. mccloskey , d. fox , n. ohara , v. usov , d. scanlan , n. mcevoy , g. s. duesberg , g. l. w. cross , h. z. zhang , and j. f. donegan , `` helium ion microscope generated nitrogen - vacancy centres in type ib diamond , '' applied physics letters * 104 * , 031109 ( 2014 ) .p. andrich , b. j. alemn , j. c. lee , k. ohno , c. f. de las casas , f. j. heremans , e. l. hu , and d. d. awschalom , `` engineered micro- and nanoscale diamonds as mobile probes for high - resolution sensing in fluid , '' nano letters * 14 * , 49594964 ( 2014 ) .t. gaebel , m. domhan , i. popa , c. wittmann , p. neumann , f. jelezko , j. r. rabeau , n. stavrias , a. d. greentree , s. prawer , j. meijer , j. twamley , p. r. hemmer , and j. wrachtrup , `` room - temperature coherent coupling of single spins in diamond , '' nature physics * 2 * , 408413 ( 2006 ) .g. waldherr , y. wang , s. zaiser , m. jamali , t. schulte - herbrggen , h. abe , t. ohshima , j. isoya , j. f. du , p. neumann , and j. wrachtrup , `` quantum error correction in a solid - state hybrid spin register , '' nature * 506 * , 204207 ( 2014 ) . t. h. taminiau , j. cramer , t. v. d. sar , v. v. dobrovitski , and r. 
hanson , `` universal control and error correction in multi - qubit spin registers in diamond , '' nature nanotechnology * 9 * , 171176 ( 2014 ) .j. meijer , b. burchard , m. domhan , c. wittmann , t. gaebel , i. popa , f. jelezko , and j. wrachtrup , `` generation of single color centers by focused nitrogen implantation , '' applied physics letters * 87 * , 261909 ( 2005 ) .j. r. rabeau , p. reichart , g. tamanyan , d. n. jamieson , s. prawer , f. jelezko , t. gaebel , i. popa , m. domhan , and j. wrachtrup , `` implantation of labelled single nitrogen vacancy centers in diamond using n15 , '' applied physics letters * 88 * , 023113 ( 2006 ) .s. pezzagna , d. rogalla , d. wildanger , j. meijer , and a. zaitsev , `` creation and nature of optical centres in diamond for single - photon emission overview and critical remarks , '' new journal of physics * 13 * , 035024 ( 2011 ) . c. wang , c. kurtsiefer , h. weinfurter , and b. burchard , `` single photon emission from siv centres in diamond produced by ion implantation , '' journal of physics b : atomic , molecular and optical physics * 39 * , 3741 ( 2006 ) .d. antonov , t. huermann , a. aird , j. roth , h .-trebin , c. mller , l. mcguinness , f. jelezko , t. yamamoto , j. isoya , s. pezzagna , j. meijer , and j. wrachtrup , `` statistical investigations on nitrogen - vacancy center creation , '' applied physics letters * 104 * , 012105 ( 2014 ) .j. meijer , s. pezzagna , t. vogel , b. burchard , h. h. bukow , i. w. rangelow , y. sarov , h. wiggers , i. plmel , f. jelezko , j. wrachtrup , f. schmidt - kaler , w. schnitzler , and k. singer , `` towards the implanting of ions and positioning of nanoparticles with nm spatial resolution , '' applied physics a * 91 * , 567571 ( 2008 ) .k. y. han , k. i. willig , e. rittweger , f. jelezko , c. eggeling , and s. w. hell , `` three - dimensional stimulated emission depletion microscopy of nitrogen - vacancy centers in diamond using continuous - wave light , '' nano letters * 9 * , 33233329 ( 2009 ) .s. pezzagna , d. wildanger , p. mazarov , a. d. wieck , y. sarov , i. rangelow , b. naydenov , f. jelezko , s. w. hell , and j. meijer , `` nanoscale engineering and optical addressing of single spins in diamond , '' small * 6 * , 21172121 ( 2010 ) .s. tamura , g. koike , a. komatsubara , t. teraji , s. onoda , l. p. mcguinness , l. rogers , b. naydenov , e. wu , l. yan , f. jelezko , t. ohshima , j. isoya , t. shinada , and t. tanii , `` array of bright silicon - vacancy centers in diamond fabricated by low - energy focused ion beam implantation , '' applied physics express * 7 * , 115201 ( 2014 ) .j. riedrich - mller , c. arend , c. pauly , f. mcklich , m. fischer , s. gsell , m. schreck , and c. becher , `` deterministic coupling of a single silicon - vacancy color center to a photonic crystal cavity in diamond , '' nano letters * 14 * , 52815287 ( 2014 ) . j. riedrich - mller , s. pezzagna , j. meijer , c. pauly , f. mcklich , m. markham , a. m. edmonds , and c. becher , `` nanoimplantation and purcell enhancement of single nitrogen - vacancy centers in photonic crystal cavities in diamond , '' applied physics letters * 106 * , 221103 ( 2015 ) .t. schroder , e. chen , l. li , m. walsh , m. e. trusheim , i. bayn , and d. englund , `` targeted creation and purcell enhancement of nv centers within photonic crystal cavities in single - crystal diamond , '' in `` conference on lasers and electro - optics 2014 , '' ( osa , 2014 ) , osa technical digest ( online ) , p. fw1b.6 .t. schroder , l. li , e. 
chen , m. walsh , m. e. trusheim , i. bayn , j. zheng , s. mouradian , h. bakhru , o. gaathon , and d. r. englund , `` deterministic high - yield creation of nitrogen vacancy centers in diamond photonic crystal cavities and photonic elements , '' in `` conference on lasers and electro - optics 2015 , '' ( osa , 2015 ) , osa technical digest ( online ) , p. fth3b.1 .a. m. edmonds , u. f. s. dhaenens - johansson , r. j. cruddace , m. e. newton , k .-m . c. fu , c. santori , r. g. beausoleil , d. j. twitchen , and m. l. markham , `` production of oriented nitrogen - vacancy color centers in synthetic diamond , '' physical review b * 86 * , 035201 ( 2012 ) .l. m. pham , n. bar - gill , c. belthangady , d. le sage , p. cappellaro , m. d. lukin , a. yacoby , and r. l. walsworth , `` enhanced solid - state multispin metrology using dynamical decoupling , '' physical review b * 86 * , 045214 ( 2012 ) .j. michl , t. teraji , s. zaiser , i. jakobi , g. waldherr , f. dolde , p. neumann , m. w. doherty , n. b. manson , j. isoya , and j. wrachtrup , `` perfect alignment and preferential orientation of nitrogen - vacancy centers during chemical vapor deposition diamond growth on ( 111 ) surfaces , '' applied physics letters * 104 * , 102407 ( 2014 ) .m. lesik , j .-tetienne , a. tallaire , j. achard , v. mille , a. gicquel , j .- f .roch , and v. jacques , `` perfect preferential orientation of nitrogen - vacancy defects in a synthetic diamond sample , '' applied physics letters * 104 * , 113107 ( 2014 ) .k. g. lee , x. w. chen , h. eghlidi , p. kukura , r. lettow , a. renn , v. sandoghdar , and s. gtzinger , `` a planar dielectric antenna for directional single - photon emission and near - unity collection efficiency , '' nature photonics * 5 * , 166169 ( 2011 ) .t. schrder , f. gdeke , m. j. banholzer , and o. benson , `` ultrabright and efficient single - photon generation based on nitrogen - vacancy centres in nanodiamonds on a solid immersion lens , '' new journal of physics * 13 * , 055017 ( 2011 ). e. neu , p. appel , m. ganzhorn , j. miguel - snchez , m. lesik , v. mille , v. jacques , a. tallaire , j. achard , and p. maletinsky , `` photonic nano - structures on ( 111)-oriented diamond , '' applied physics letters * 104 * , 153108 ( 2014 ) .s. a. momenzadeh , r. j. sthr , f. f. de oliveira , a. brunner , a. denisenko , s. yang , f. reinhard , and j. wrachtrup , `` nanoengineered diamond waveguide as a robust bright platform for nanomagnetometry using shallow nitrogen vacancy centers , '' nano letters * 15 * , 165169 ( 2015 ) .s. l. mouradian , t. schrder , c. b. poitras , l. li , j. goldstein , e. h. chen , m. walsh , j. cardenas , m. l. markham , d. j. twitchen , m. lipson , and d. englund , `` scalable integration of long - lived quantum memories into a photonic circuit , '' physical review x * 5 * , 031009 ( 2015 ) .b. j. m. hausmann , t. m. babinec , j. t. choy , j. s. hodges , s. hong , i. bulu , a. yacoby , m. d. lukin , and m. lonar , `` single - color centers implanted in diamond nanostructures , '' new journal of physics * 13 * , 045004 ( 2011 ) .c. j. widmann , c. giese , m. wolfer , d. brink , n. heidrich , and c. e. nebel , `` fabrication and characterization of single crystalline diamond nanopillars with nv - centers , '' diamond and related materials * 54 * , 28 ( 2015 ) .p. maletinsky , s. hong , m. s. grinolds , b. hausmann , m. d. lukin , r. l. walsworth , m. lonar , and a. 
yacoby , `` a robust scanning diamond sensor for nanoscale imaging with single nitrogen - vacancy centres , '' nature nanotechnology * 7 * , 320324 ( 2012 ) .m. yoshita , k. koyama , y. hayamizu , m. baba , and h. akiyama , `` improved high collection efficiency in fluorescence microscopy with a weierstrass - sphere solid immersion lens , '' japanese journal of applied physics * 41 * , l858l860 ( 2002 ) . h. w. choi , e. gu , c. liu , c. griffin , j. m. girkin , i. m. watson , and m. d. dawson , `` fabrication of natural diamond microlenses by plasma etching , '' journal of vacuum science & technology b * 23 * , 130132 ( 2005 ) . l. marseglia , j. p. hadden , a. c. stanley - clarke , j. p. harrison , b. patton , y .- l .d. ho , b. naydenov , f. jelezko , j. meijer , p. r. dolan , j. m. smith , j. g. rarity , and j. l. obrien , `` nanofabricated solid immersion lenses registered to single emitters in diamond , '' applied physics letters * 98 * , 133107 ( 2011 ) .m. jamali , i. gerhardt , m. rezai , k. frenner , h. fedder , and j. wrachtrup , `` microscopic diamond solid - immersion - lenses fabricated around single defect centers by focused ion beam milling , '' review of scientific instruments * 85 * , 123703 ( 2014 ) . c. hepp , t. mller , v. waselowski , j. n. becker , b. pingault , h. sternschulte , d. steinmller - nethl , a. gali , j. r. maze , m. atatre , and c. becher , `` electronic structure of the silicon vacancy color center in diamond , '' physical review letters * 112 * , 036405 ( 2014 ) .p. siyushev , f. kaiser , v. jacques , i. gerhardt , s. bischof , h. fedder , j. dodson , m. markham , d. twitchen , f. jelezko , and j. wrachtrup , `` monolithic diamond optics for single photon detection , '' applied physics letters * 97 * , 241902 ( 2010 ) .d. riedel , d. rohner , m. ganzhorn , t. kaldewey , p. appel , e. neu , r. warburton , and p. maletinsky , `` low - loss broadband antenna for efficient photon collection from a coherent spin in diamond , '' physical review applied * 2 * , 064011 ( 2014 ) .l. li , e. h. chen , j. zheng , s. l. mouradian , f. dolde , t. schrder , s. karaveli , m. l. markham , d. j. twitchen , and d. englund , `` efficient photon collection from a nitrogen vacancy center in a circular bullseye grating , '' nano letters * 15 * , 14931497 ( 2015 ) .a. young , c. y. hu , l. marseglia , j. p. harrison , j. l. obrien , and j. g. rarity , `` cavity enhanced spin measurement of the ground state spin of an nv center in diamond , '' new journal of physics * 11 * , 013007 ( 2009 ) .t. m. babinec , j. t. choy , k. j. m. smith , m. khan , and m. lonar , `` design and focused ion beam fabrication of single crystal diamond nanobeam cavities , '' journal of vacuum science & technology b * 29 * , 010601 ( 2011 ) .a. faraon , z. huang , v. acosta , c. santori , and r. beausoleil , `` coupling of nitrogen - vacancy centers to photonic crystal resonators in monocrystalline diamond , '' in `` conference on lasers and electro - optics 2012 , '' ( osa , 2012 ) , osa technical digest , p. qm3c.5 .b. j. m. hausmann , b. j. shields , q. quan , y. chu , n. p.de leon , r. evans , m. j. burek , a. s. zibrov , m. markham , d. j. twitchen , h. park , m. d. lukin , and m. lonar , `` coupling of nv centers to photonic crystal nanobeams in diamond , '' nano letters * 13 * , 57915796 ( 2013 ) .i. bayn , b. meyler , j. salzman , v. richter , and r. 
kalish , `` single crystal diamond photonic crystal nanocavity : fabrication and initial characterization , '' in `` conference on lasers and electro - optics 2010 , '' ( osa , 2010 ) , osa technical digest ( online ) , p. qthl7 .b. j. m. hausmann , b. shields , q. quan , p. maletinsky , m. mccutcheon , j. t. choy , t. m. babinec , a. kubanek , a. yacoby , m. d. lukin , and m. lonar , `` integrated diamond networks for quantum nanophotonics , '' nano letters * 12 * , 15781582 ( 2012 ) .b. j. m. hausmann , i. b. bulu , p. b. deotare , m. mccutcheon , v. venkataraman , m. l. markham , d. j. twitchen , and m. lonar , `` integrated high - quality factor optical resonators in diamond , '' nano letters * 13 * , 18981902 ( 2013 ) .z. huang , a. faraon , c. santori , v. acosta , and r. g. beausoleil , `` microring resonator - based diamond optothermal switch : a building block for a quantum computing network , '' in `` proc .spie 8635 , '' , vol .8635 ( spie digital library , 2013 ) , vol .8635 , p. 86350e .p. e. barclay , k .-m . c. fu , c. santori , and r. g. beausoleil , `` chip - based microcavities coupled to nitrogen - vacancy centers in single crystal diamond , '' applied physics letters * 95 * , 1911151911153 ( 2009 ) .m . c. fu , p. e. barclay , c. santori , a. faraon , and r. g. beausoleil , ``low - temperature tapered - fiber probing of diamond nitrogen - vacancy ensembles coupled to gap microcavities , '' new journal of physics * 13 * , 055023 ( 2011 ) . m. trupke , e. a. hinds , s. eriksson , e. a. curtis , z. moktadir , e. kukharenka , and m. kraft , `` microfabricated high - finesse optical cavity with open access and small volume , '' applied physics letters * 87 * , 211106 ( 2005 ). r. albrecht , a. bommer , c. pauly , f. mcklich , a. w. schell , p. engel , t. schrder , o. benson , j. reichel , and c. becher , `` narrow - band single photon emission at room temperature based on a single nitrogen - vacancy center coupled to an all - fiber - cavity , '' applied physics letters * 105 * , 073113 ( 2014 ) .m . c. fu ,c. santori , p. e. barclay , i. aharonovich , s. prawer , n. meyer , a. m. holm , and r. g. beausoleil , `` coupling of nitrogen - vacancy centers in diamond to a gap waveguide , '' applied physics letters * 93 * , 234107 ( 2008 ) .m. gould , s. chakravarthi , i. r. christen , n. thomas , s. dadgostar , y. song , m. l. lee , f. hatami , and k .- m . c. fu ,`` a large - scale gap - on - diamond integrated photonics platform for nv center - based quantum information , '' arxiv:1510.05047 [ cond - mat , physics : physics , physics : quant - ph ] ( 2015 ) .r. n. patel , t. schrder , n. wan , l. li , s. l. mouradian , e. h. chen , and d. r. englund , `` efficient photon coupling from a diamond nitrogen vacancy center by integration with silica fiber , '' light : science & applications * 5 * , e16032 ( 2016 ) .j. p. sprengers , a. gaggero , d. sahin , s. jahanmirinejad , g. frucci , f. mattioli , r. leoni , j. beetz , m. lermer , m. kamp , s. hfling , r. sanjines , and a. fiore , `` waveguide superconducting single - photon detectors for integrated quantum photonic circuits , '' applied physics letters * 99 * , 181110 ( 2011 ) .f. najafi , j. mower , n. c. harris , f. bellei , a. dane , c. lee , x. hu , p. kharel , f. marsili , s. assefa , k. k. berggren , and d. englund , `` on - chip detection of non - classical light by scalable integration of single - photon detectors , '' nature communications * 6 * , 5873 ( 2015 ). h. a. atikian , a. eftekharian , a. j. salim , m. j. 
burek , j. t. choy , a. h. majedi , and m. lonar , `` superconducting nanowire single photon detector on diamond , '' applied physics letters * 104 * , 122602 ( 2014 ) .p. rath , o. kahl , s. ferrari , f. sproll , g. lewes - malandrakis , d. brink , k. ilin , m. siegel , c. nebel , and w. pernice , `` superconducting single - photon detectors integrated with diamond nanophotonic circuits , '' light : science & applications * 4 * , e338 ( 2015 ) .x. chew , g. zhou , f. s. chau , and j. deng , `` enhanced resonance tuning of photonic crystal nanocavities by integration of optimized near - field multitip nanoprobes , '' journal of nanophotonics * 5 * , 059503 ( 2011 ) .m. s. blok , n. kalb , a. reiserer , t. h. taminiau , and r. hanson , `` towards quantum networks of single spins : analysis of a quantum memory with an optical interface in diamond , '' faraday discussions * 184 * , 173182 ( 2015 ) .f. dolde , v. bergholm , y. wang , i. jakobi , b. naydenov , s. pezzagna , j. meijer , f. jelezko , p. neumann , t. schulte - herbrggen , j. biamonte , and j. wrachtrup , `` high - fidelity spin entanglement using optimal control , '' nature communications * 5 * , 3371 ( 2014 ) .j. cramer , n. kalb , m. a. rol , b. hensen , m. s. blok , m. markham , d. j. twitchen , r. hanson , and t. h. taminiau , `` repeated quantum error correction on a continuously encoded qubit by real - time feedback , '' arxiv:1508.01388 [ cond - mat , physics : quant - ph ] ( 2015 ) .
|
the past decade has seen great advances in developing color centers in diamond for sensing , quantum information processing , and tests of quantum foundations . increasingly , the success of these applications as well as fundamental investigations of light - matter interaction depend on improved control of optical interactions with color centers from better fluorescence collection to efficient and precise coupling with confined single optical modes . wide ranging research efforts have been undertaken to address these demands through advanced nanofabrication of diamond . this review will cover recent advances in diamond nano- and microphotonic structures for efficient light collection , color center to nanocavity coupling , hybrid integration of diamond devices with other material systems , and the wide range of fabrication methods that have enabled these complex photonic diamond systems .
|
[ sec:1 ] in this paper we are interested in swarm modelling , which represents the collective behavior of interacting agents of similar size and shape such as insects , birds or aerial vehicles . inside the swarm , agents communicate with each other , working together to accomplish tasks and reach goals . as an example , in the last few years , the use of unmanned aerial vehicle swarms has been widely developed for numerous applications including monitoring of natural disasters , industrial accidents , surveillance of crowds , sensing in large environments , search and rescue missions , searching for sources of pollution , close observation of protected areas and many others ( see for instance or ) . main advantages are that the considered swarm can quickly cover a large area while requiring only one operator , or can scan high - risk sites rapidly whereas a large vehicle can not . all of these real - world challenges motivate serious investigations on how to control multiple vehicles cooperating automatically to accomplish a given task . on the other hand , nature provides great examples of decentralized , coordinated behaviors in groups of living organisms . indeed , it is surprising how swarms of insects or flocks of birds can travel in large , dense groups without colliding ( see and ) . even in the presence of external obstacles these agents are able to avoid collisions smoothly , and such biological groups are remarkably effective at maintaining optimized group structure , detecting and avoiding obstacles and predators , and performing other complex tasks . observing the collective motion of animals or pedestrians , remarkable patterns known as emergent behaviors are achieved by following simple rules . such impressive inter - agent coordination is accomplished despite their natural physiological constraints . although individual agents have limited sensing capability and can not see the whole formation , they can form a flock with no apparent leader , which implies the lack of a centralized command . this highly coordinated collective behavior emerges from localized interactions among individuals within the swarm . in this context , the objective of this paper is to propose a three dimensional model for a swarm of aerial vehicles inspired by the coordinated behaviors of such biological groups . the following key points will be taken into account . first , the model will be based on a sequence of simple rules followed by every individual ( microscopic level ) . then , it will include constraints related to limited sensor information . moreover , since many applications occur in a high density traffic environment , the model will result in safe paths for all individuals . to reach our objective , we consider an interacting particle system for the collective behavior of swarms . in behavioral based methods , all the agents are considered equal and they adopt behaviors built on information coming only from their neighborhood . the behavior of an agent is usually based on simple rules . thanks to the feedback shared between neighboring agents , these methods follow a decentralized approach , making them easily scalable .
according to , in high density traffic situations , it is recommended to use a decentralized coordination , even if there is less freedom for maneuver . however , it is usually difficult to predict the group behavior , and the stability of the formation is generally not easy to prove either . these methods are among the first to have been used in motion planning for multi - agent systems as they are easily stated and generally efficiently scalable , since their rules are supposed to be implemented independently for each agent . the design of safe paths is related to collision avoidance , which plays an important role in the context of managing multiple vehicles . it has been an active area of research in the field of robotics using the collision cone method and the inevitable collision states approach . the collision cone approach can be used to determine whether two objects , of irregular shapes and arbitrary sizes , are on a collision course . it has been the basis for many collision / obstacle avoidance algorithms . these methods are developed for robotic applications with knowledge about the obstacles ( position , velocity , and acceleration ) . there has also been some research on aircraft collision avoidance , both from the multiple - vehicle and the air traffic control points of view . all these collision avoidance procedures are based on three steps : see , detect , and avoid . but most of the algorithms developed for air traffic management are those that guarantee safe trajectories in very low density traffic involving only two or three aircraft . another approach for collision avoidance is artificial potential based methods , where individuals are treated like charged particles of the same charge that repel each other , whereas the destination of an individual is modeled as a charge of the opposite sign so as to attract or navigate it toward the destination . the artificial potential methods are susceptible to local minima and require braking forces . in this paper , our goal is first to develop a three dimensional dynamical approach describing the motions of individual and interacting particles . the model is inspired by the ones developed in and for pedestrian collective motion in 2d , but here we are concerned with the 3d motion of aerial vehicles or birds , which leads to an enhanced but more complex dynamics . based on the vision based approach , we propose a model decomposed into two phases for collision avoidance , including both particle - to - particle and moving - obstacle avoidance . when dealing with large populations , in both cases one faces the well - known problem of the curse of dimensionality , a term first coined by bellman precisely in the context of dynamic optimization : the complexity of numerical computations of the solutions of the above problems blows up as the size of the population increases . a possible way out is the so - called mean - field approach , where the individual influence of the entire population on the dynamics of a single agent is replaced by an averaged one . this substitution principle results in a unique mean - field equation and allows the computation of solutions , cutting loose from the dimensionality . therefore , we perform a mean field limit of the microscopic model to replace self - interactions between particles by self - consistent fields . the mean field approximation corresponds to the case where the force itself depends on some average of the distribution function .
as a consequence , binary interactions between particles are not described but instead their global effect on each particle is taken into account . this approximation is justified especially in the configuration where the swarm is very close to the target and therefore identifying binary interactions is very complex . as a result , we obtain a space - inhomogeneous kinetic pde . the remainder of the paper is organized as follows . in section [ sec:2 ] , we present the individual agent based model proposed for self - propelled particle swarms including collision avoidance . in section [ sec:3 ] , the associated mean - field limit is formally derived and analysed . section [ sec:4 ] is devoted to numerical experiments of the microscopic model . we conclude with final remarks and future works in section [ sec:5 ] . [ sec:2 ] we are interested in modeling the motion of individuals ( vehicles , birds , ... ) with the objective of driving each individual of the swarm to a target point without colliding with any moving obstacles or other individuals . since we consider a swarm we do not explicitly constrain the relative location of each individual . this section is devoted to the presentation of the microscopic model considering particles with position and velocity , with . then , we derive a three - dimensional interacting particle system . the agent - based model we consider is inspired by the one proposed in , and developed for crowd dynamics . in these references , the heuristic - based model proposes that pedestrians follow a heuristic rule composed of two phases : 1 . a perception phase ; 2 . a decision - making phase . in the perception phase , the subjects make an assessment of the dangerousness of the possible encounters in all the possible directions of motion . in the decision - making phase , they turn towards the direction which minimizes the distance walked towards their target while avoiding encounters with other pedestrians . here , we mainly follow the same assumptions to describe the perception phase , but then the individual changes its velocity in order to minimize the probability of collision . in this section , we discuss the perception phase . we consider a particle located at a position , with a velocity , interacting with a collision partner located at a position , with a velocity . the sketch of the binary encounter between these two particles is depicted in figure [ fig:1 ] . we assume that is the time where particle evaluates the likeliness of a collision with particle . this evaluation is made by supposing that each one maintains its velocity , ( respectively ) constant . as depicted in figure [ fig:1 ] , we introduce two notable points and that we define just below . [ def : int - points ] the interaction point ( resp . ) of particle ( resp . ) in their interaction is the point on the -th particle 's trajectory ( resp . on the -th particle 's trajectory ) such that is minimal , _ i.e. _ .
[ figure fig:1 : sketch of the binary encounter between particle i and particle j , showing the positions and velocities of the two particles , the interaction points on each straight - line trajectory , the distance to interaction , the minimal distance and the safety radius around the interaction point of particle i . ]

[ def : dmin - tti - dti ] the interaction between particle and particle leads us to define three key quantities associated with the perception phase : * the minimal distance represents the smallest distance which separates the two particles and , supposing that they cruise on a straight line at constant velocities and . from definition [ def : int - points ] , the minimal distance is then the distance between the interaction points such that * the time - to - interaction is the time needed by the subject to reach the interaction point from its current position at time , which is counted positive if this time belongs to the future of the subject and negative if it belongs to the past . then , is the value of for which the quantity is minimal . * the distance - to - interaction is the distance which separates the subject 's current position from the interaction point . the distance - to - interaction is counted positive if the interaction point is reached in the future and negative if the interaction point was crossed in the past : where denotes the sign of . notice that the quantities and are symmetric with respect to and . here , we have supposed that each individual has a perfect knowledge of its own and its partner 's positions and velocities , and we assume that they are able to estimate or to compute the distance - to - interaction , the minimal distance and the time to interaction with perfect accuracy from the knowledge of and . let us now compute , and , assuming that a particle with a phase space position can detect an interaction partner located in its perception region with a position and velocity . we follow the same strategy as for two dimensional pedestrian flow : denoting by and the positions of the two particles at time , we define the distance between the two particles at time by therefore , for each particle and its interaction partner , we have the following result . [ prop:1 ] the value of the time to interaction for the particle , is whereas the distance to interaction of particle and the minimal distance are given by on the one hand , the value of the time to interaction for the particle is obtained by minimizing the quadratic function of time ( [ distance-2particles ] ) , such that . hence it gives . then , the distance to interaction of particle is given by the distance traveled by this particle during the time to interaction , _ i.e. _ where is given by definition [ timetointeraction ] . this leads to . on the other hand , the minimal distance is given by the minimal value of ( [ distance-2particles ] ) , _ i.e. _ , which leads to . the objective of the perception phase is to describe the configuration corresponding to a potential collision of the particle with the surrounding particles .
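as a concrete illustration of proposition [ prop:1 ] , the three perception quantities can be evaluated numerically by minimizing the squared relative distance over time , exactly as in the proof above . the following python sketch is only illustrative and not taken from the paper ; the function and variable names are ours , and straight - line motion at constant velocity is assumed .

```python
import numpy as np

def pair_interaction(xi, vi, xj, vj, eps=1e-12):
    """Perception-phase quantities for a pair of particles moving in straight
    lines at constant velocities: time-to-interaction, signed
    distance-to-interaction of particle i, and minimal distance."""
    dx = np.asarray(xj, dtype=float) - np.asarray(xi, dtype=float)  # relative position
    dv = np.asarray(vj, dtype=float) - np.asarray(vi, dtype=float)  # relative velocity
    dv2 = np.dot(dv, dv)
    if dv2 < eps:
        # identical velocities: the inter-particle distance never changes
        return np.inf, np.inf, float(np.linalg.norm(dx))
    tau = -np.dot(dx, dv) / dv2                  # minimizer of |dx + t*dv|^2
    dmin = float(np.linalg.norm(dx + tau * dv))  # distance at the interaction points
    dti = float(np.linalg.norm(vi)) * tau        # distance covered by i, signed like tau
    return tau, dti, dmin
```

with this convention , a negative time - to - interaction corresponds to an encounter that lies in the past of the particle and can be discarded in the perception phase .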
from the definitions of the minimal distance and time - to - interaction , we consider that a collision may occur between particle and particle when the following conditions are satisfied . * first , we need , which means that the encounter is observed in the future . * second , if we define a safety zone for the particle delimited by the circle of radius as depicted in figure [ fig:1 ] , then a collision will occur if . combining these two conditions means that , in the future , the trajectories of the two particles will meet inside the safety zone . therefore , we define the set of particles which may interact with a particle located at at time , as . however , some restrictions related to the perception sensitivity of the individual ( vision , sensors , etc ) have also to be taken into account . as a consequence , considering a test particle interacting with another particle , we restrict the set of potential collision partners to those belonging to the `` vision cone '' of particle , denoted . this region is represented for instance as the blue area in figure [ fig:2 ] and models the set of positions for the particle that are seen by the particle . let us now define the `` vision cone '' precisely . [ visioncone ] introducing a threshold number , the `` vision cone '' for the particle is the cone centered at with angle about the direction . to summarize the perception phase , for each particle we define the set of interaction partners as the set . we now detail the decision - making phase in order to model collision avoidance . first let us emphasize that the three dimensional swarm modeling is quite different from the two dimensional case encountered in collision avoidance for pedestrians or robots . indeed , in the three dimensional case , particles can not suddenly stop or brake !
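the perception phase can be summarized in a short routine that assembles the set of interaction partners of a given particle . the sketch below is again illustrative : it reuses the pair_interaction function from the previous sketch , and the safety radius and the half - angle of the vision cone are free model parameters , so the values used here are placeholders .

```python
import numpy as np

def interaction_partners(i, X, V, R=1.0, cone_half_angle=np.pi / 3):
    """Indices j of the potential collision partners of particle i:
    the encounter lies in the future, the minimal distance enters the
    safety zone of radius R, and j lies inside the vision cone of i."""
    partners = []
    xi, vi = X[i], V[i]
    speed = np.linalg.norm(vi)
    for j in range(len(X)):
        if j == i:
            continue
        tau, _, dmin = pair_interaction(xi, vi, X[j], V[j])
        if tau <= 0.0 or dmin >= R:
            continue                              # no future encounter inside the safety zone
        r = X[j] - xi                             # line of sight from i to j
        cos_angle = np.dot(vi, r) / (speed * np.linalg.norm(r) + 1e-12)
        if cos_angle >= np.cos(cone_half_angle):  # j is seen by particle i
            partners.append(j)
    return partners
```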
here we consider the motion of a particle with position and velocity , which interacts with a particle located at . depending on the position of the interaction points , the collision avoidance procedure leads us to consider three configurations : * safe configuration ( illustrated in figure [ fig:2]-(a ) ) , where the particle does not change its direction and continues its cruise ; * blind configuration ( illustrated in figure [ fig:2]-(b ) ) , where a collision is likely , but particle does not see , hence it continues its cruise ; * unsafe configuration ( illustrated in figure [ fig:2]-(c ) ) , where the particle has detected an interaction partner and both of them modify their direction .

[ figure fig:2 : the three interaction configurations , showing in each panel the positions and velocities of particles i and j together with the vision cone of particle i : ( a ) safe configuration , ( b ) blind configuration , ( c ) unsafe configuration . ]

to describe more precisely this turning process , we introduce the local frame of the particle centred at position , and denoted by with , the azimuthal angle and the polar angle . hence we have . the collision avoidance model proposed below is based on the situation where a particle interacts with another one and will modify its direction but preserve its speed . to determine this turning rate and the rotation axis , we need to define some indicators on the occurrence of collisions . the first indicator of the dangerousness of the collision is the time , which indicates the remaining time before a collision occurs . the second indicator , measured by particle , is the time derivative of the relative bearing angle or azimuthal angle and the relative polar angle formed in its own frame between the direction and the position of particle .
to define these two angles and their time derivatives rigorously, we need to consider the frame of the particle at position with velocity . then we can define the relative bearing angle and the relative polar angle . consider the local frame centered at of the particle , and denote by its collision partner located at . we define * the relative bearing or azimuthal angle as the azimuthal angle of point in the frame centered at ; * the relative polar angle as the polar angle of point in the frame centered at . [def:bp]

[figure: sketch of definition [def:bp], showing the relative bearing (azimuthal) angle and the relative polar angle of the point in the spherical frame attached to particle .]

we also introduce the unit vector of the line connecting the two particles and the distance between the agents. these quantities are defined by the following relations: now let us compute the time derivatives of and , which will be key indicators in the collision avoidance process. assume that at time the particles are at positions and , and move with constant velocities and . then where . [lmm:1] by the definition of the relative bearing angle and the relative polar angle, we can write: taking the time derivative of this relation, and using the fact that is constant since the motion of the particle is supposed rectilinear with constant speed, leads to

$$\dot{\mathbf{k}}_{ij} \;=\; \cdots \;+\; \sin\beta_{ij}\,\dot\alpha_{ij}\,\left[\,-\sin\alpha_{ij}\,\mathbf{e}_{\rho_i} \,+\, \cos\alpha_{ij}\,\mathbf{e}_{\phi_i}\,\right],$$

where we recognize the expression of the two unit vectors constructed by writing the point in spherical coordinates in the frame of particle , that is , hence we have on the other hand, taking the time derivative of the first equation ([eq:defk]), and after some easy computations, we find

$$\dot{\mathbf{k}}_{ij} \;=\; \frac{1}{d_{ij}}\,\left[\,\langle \mathbf{v}_{j}-\mathbf{v}_{i}\,,\,\mathbf{e}_{\alpha_{ij}}\rangle\,\mathbf{e}_{\alpha_{ij}} \;+\; \langle \mathbf{v}_{j}-\mathbf{v}_{i}\,,\,\mathbf{e}_{\beta_{ij}}\rangle\,\mathbf{e}_{\beta_{ij}}\,\right].$$

identifying these two relations, we get , which gives rise to formula ([eq:bearing]) for the derivatives of the relative bearing and polar angles. the proposed control scheme is based on gyroscopic forces, but adapted to the constraints due to the perception region.
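the following sketch shows one way to compute these two angles numerically and to check lemma [lmm:1] by finite differences. the construction of the two transverse axes of the local frame is an arbitrary smooth choice, and measuring the polar angle from the direction of motion is an assumption made here for illustration; neither convention is fixed by the excerpt above.

```python
import numpy as np

def local_frame(v_i):
    """Orthonormal frame (e1, e2, e3) attached to particle i, with e1 along v_i.
    The two transverse axes are an arbitrary but smooth choice."""
    e1 = v_i / np.linalg.norm(v_i)
    ref = np.array([0.0, 0.0, 1.0])
    if abs(np.dot(e1, ref)) > 0.99:          # avoid a degenerate cross product
        ref = np.array([1.0, 0.0, 0.0])
    e2 = np.cross(ref, e1); e2 /= np.linalg.norm(e2)
    e3 = np.cross(e1, e2)
    return e1, e2, e3

def bearing_and_polar(x_i, v_i, x_j):
    """Relative bearing (azimuthal) angle and polar angle of particle j in the
    frame of particle i, with the polar angle measured from the velocity axis."""
    e1, e2, e3 = local_frame(v_i)
    r = x_j - x_i
    cx, cy, cz = np.dot(r, e2), np.dot(r, e3), np.dot(r, e1)
    alpha = np.arctan2(cy, cx)                       # azimuthal angle in the (e2, e3) plane
    beta = np.arccos(cz / np.linalg.norm(r))         # polar angle from the direction of motion
    return alpha, beta

def angle_rates(x_i, v_i, x_j, v_j, h=1e-6):
    """Finite-difference check of lemma [lmm:1]: advance both particles along
    straight lines over a small time h and differentiate the angles numerically."""
    a0, b0 = bearing_and_polar(x_i, v_i, x_j)
    a1, b1 = bearing_and_polar(x_i + h * v_i, v_i, x_j + h * v_j)
    return (a1 - a0) / h, (b1 - b0) / h
```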
on the one hand, we consider the situation where the two particles see each other; then they cooperate to avoid colliding (cooperative interaction, represented in figure [fig:4]-(a)). on the other hand, we describe the interaction of one particle with an obstacle or with another particle which does not deviate from its trajectory (non-cooperative interaction, represented in figure [fig:4]-(b)).

[figure [fig:4]: two panels (a) and (b) sketching a cooperative interaction, where both particles see each other and both turn, and a non-cooperative interaction, where only one particle reacts.]

assume that at time , both particles are such that . then the two particles will rotate, in order to avoid colliding, along a rotation axis defined by a vector field which has to be determined such that where defines the rotation frequency. in the present situation the two interaction points and are relatively close, and the two particles need to rotate in order to avoid colliding. hence both particles and will rotate along the axis defined by the vector in order to increase the minimal distance given in ([distance-2particles]) such that to increase the minimal distance we need to decrease the quantity , that is, to increase the magnitude of the time derivatives of the relative bearing and polar angles given in lemma [lmm:1]. the time derivatives of these angles indicate that a collision is very likely when they are small. thus, we write the vector in the basis as and determine the values of in order to increase the magnitude of . in the next lemma, we determine the relationship between the rotation axis and the increase of the time derivatives of the bearing and polar angles, which decreases the likelihood of the collision. we follow the same strategy as for two dimensional problems. assume that two particles are such that and consider the time derivatives of the relative bearing and polar angles given in ([eq:bearing]); if the rotation axis given by ([rij]) is such that , then is a solution to the following system with as a consequence of the non-negativity of , we also have . [lmm:2] let us consider the expression of given by ([eq:bearing]).
then we compute the time derivative of both quantities and now we observe that , \\ \ , \\ \dot{{\mathbf e}}_{\beta_{ij } } \,=\ , \dot\alpha_{ij } \ , \cos\beta_{ij } \ , { { \mathbf e}}_{\alpha_{ij } } \,-\ , \dot\beta_{ij}\,{{\mathbf k}}_{ij } , \end{array}\right.\ ] ] hence using the definition of the unit vector in ( [ eq : defk ] ) and the definition of in ( [ timetointeraction ] ) , it yields for the time derivative of the relative polar angle , then for the time derivative of , therefore , from the definition of in ( [ rij ] ) and using that , we get it gives using ( [ eq : bearing ] ) , the following system of equations from the assumption ( [ toto2 ] ) , we get the non - negativity of the last coefficients . therefore, multiplying the first equation of ( [ confit2 ] ) by and the second one by , it gives that hence the result follows .applying lemma [ lmm:2 ] , we observe that we can choose orthogonal to the unit vector since this direction does not have any effect on the variation of .thus , the simplest choice is for any frequency , we easily verify that hence , it gives and for we have with consider at time two particles such that but .then only the particle will rotate in order to avoid collision along a rotation axis defined by a vector field which has to be determined such that therefore we apply the same strategy as the one presented below to determine the condition for which the time derivative of the polar angle and will increase .hence we prove the following result .assume that two particles are such that and and consider the time derivative of the relative bearing and polar angles given in ( [ eq : bearing ] ) and the rotational axis given by ( [ rij ] ) is such that , then , is solution to the following system with as a consequence of the non - negativity of , we also have [ lmm:3 ] we proceed as in the proof of lemma [ lmm:2 ] , hence we get and for the time derivative of , furthermore , from the expression of and in ( [ def : e ] ) and choosing , we get that it gives the following system of equations from the assumption ( [ toto3 ] ) , we get the non - negativity of the last coefficients . 
therefore, multiplying the first equation of ( [ confit3 ] ) by and the second one by , it gives that we choose such that and hence , we can apply lemma [ lmm:3 ] and the particle will deviate from whereas will continue its free motion .finally , taking into account all the interactions between particles at time , the force field applied for collision avoidance is given by the sum of interactions as with a rotational axis given by whereas the frequency is chosen as with the function corresponds to either cooperative or non - cooperative actions as explained above , note that in the particular case where is colinear to , the vector .therefore , in that case we choose it as using the same strategy as the one described below , obstacles are treated as particles , where the particle interacts with the closest point belonging to the intersection of the obstacle and the vision cone of the particle at time , whereas is the given velocity of the obstacle .then the collision avoidance follows the same process as before except that the obstacle does not deviate .on the other hand , a force is applied to steer particle to its destination .the potential is the distance function where represents the location of the target , whereas a friction term is added to control the speed of the particle .hence the particle is directed by the sum of the gradient of the potential field and the friction force in the following manner where represents the friction coefficient .obviously , the motion of particles is not fully deterministic .when some decisions need to be made in front of several alternatives , the response of the subjects is subject - dependent .the simplest way to model this inherent uncertainty consists in adding a brownian motion in velocity where is the noise intensity and where are standard white noises in 3d , which are independent from one particle to another one .the circle means that the stochastic differential equation must be understood in the stratonovich sense .the integration of this stochastic differential equation generates a brownian motion .this stochastic term adds up to the previous ones .finally from the requirements defined in the perception and decision making phases , we get the following model constructed from the force field and , where are given in ( [ model:1])-([model:2 ] ) . note that in the two dimensional case , the interactions occur in the horizontal plane and the rotation axis is parallel to , hence we recover the model proposed for pedestrian in .consider the solution to the agent - based model ( [ eq : dynamics ] ) without noise ( ) .then the energy given by satisfies the following estimate simply multiply the second equation of ( [ eq : dynamics ] ) by and integrate by part . by orthogonality property, we get the energy estimate .we now introduce a statistical description of the system . 
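before moving on to the statistical description, the following sketch shows how one explicit time step of the dynamics ([eq:dynamics]) can be organised. the relative weighting of the avoidance, steering and friction terms, the crude euler treatment of the stratonovich noise, and the helper computing the summed rotation axes are assumptions of this illustration rather than choices made in the text; all names are illustrative.

```python
import numpy as np

def step(x, v, targets, summed_axes, dt, gamma=0.5, sigma=0.0, rng=None):
    """One explicit Euler step of the agent dynamics sketched in (eq:dynamics).

    x, v        : (N, 3) arrays of positions and velocities
    targets     : (N, 3) array of destination points (one per agent)
    summed_axes : callable (x, v) -> (N, 3) array with the summed rotation
                  axes built from the pairwise avoidance rules (user supplied)
    gamma, sigma: friction coefficient and noise intensity (illustrative values)

    The avoidance term rotates each velocity (omega x v leaves the speed
    unchanged); the steering term is minus the gradient of the distance
    potential, i.e. a unit vector towards the target; the Stratonovich
    correction for the noise is ignored in this crude Euler scheme.
    """
    rng = rng or np.random.default_rng()
    omega = summed_axes(x, v)                                  # (N, 3) avoidance axes
    to_target = targets - x
    dist = np.linalg.norm(to_target, axis=1, keepdims=True) + 1e-12
    steer = to_target / dist                                   # -grad of |x - x_d|
    dv = np.cross(omega, v) + steer - gamma * v
    if sigma > 0:                                              # Euler-Maruyama scaling of the noise
        dv = dv + sigma * rng.standard_normal(v.shape) / np.sqrt(dt)
    v_new = v + dt * dv
    x_new = x + dt * v_new
    return x_new, v_new
```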
instead of using the exact positions and velocities of the particles, we rather describe the system in terms of the probability distribution . specifically, is the probability of finding particles in a small physical volume about point , within a velocity neighborhood of velocity , at time . if the force term is due to purely external causes or depends smoothly on the distribution function, it can be shown that satisfies the following kinetic equation: where corresponds to the interaction term in ([eq:dynamics]) and is given by where is the function characterizing the kind of interaction (cooperative or non-cooperative) and is given by whereas represents the intersection of the dangerous zone and the vision cone with given by and the function and corresponds to finally is given by

[sec:4] in this section we present simulations to show the effectiveness of the collision avoidance procedure proposed in this paper. we choose a smooth external potential such that and the friction coefficient is fixed to . furthermore, to emphasize the effect of the collision avoidance process, we neglect the noise and set in our simulations. we first consider the simple situation where all particles move in a direction parallel to the horizontal plane. initially, all the particles are located on a circle and want to move in the opposite direction. therefore, in this very specific situation, the collision point of all particles is the center of the circle. we consider the microscopic model without any noise and choose . for the vision cone given in definition [visioncone] we take , whereas the axis of rotation and the turning frequency are given in ([model:0])-([model:2]). since the motion occurs in the horizontal plane, we expect the axis of rotation to be collinear to the unit vector . in figure [fig:test0-1], we present the numerical results with two, three, four and nine particles and observe that the present model preserves the symmetry perfectly. furthermore, due to the perception phase, the collision is anticipated, which seems to guarantee a smooth trajectory rather than a brutal change of direction.

[sec:5] in this article, we have proposed a three dimensional dynamical model for collision avoidance based on previous works in two dimensions for pedestrian flows. this individual based model relies on a vision-based framework: the particles analyze the scene and react to the collision-threatening partners by changing their direction of motion. we have also proposed a kinetic version of this individual based model and performed some numerical experiments which illustrate the ability of the microscopic model to avoid collisions in three dimensions. in a future work, the approach developed in section [sec:3], which is based on a mean field model, will be investigated to study the collision avoidance process in the presence of many vehicles. indeed, for a large number of particles, sensors are not able to distinguish each individual and only clouds of particles are detected; the application of mean field models may then contribute to the design of efficient algorithms, since the sum of interacting particles is replaced by a self-consistent force. on the other hand, more precise models can be applied to describe the motion of vehicles in three dimensions as multi-agent dynamics where each agent is described by its position and body attitude. more precisely, each agent travels in a given direction and its frame can rotate around it, adopting different configurations.
in this manner , the frame attitude is described by three orthonormal axes giving rotation matrices .
|
this paper presents a three dimensional collision avoidance approach for aerial vehicles inspired by coordinated behaviors in biological groups . the proposed strategy aims to enable a group of vehicles to converge to a common destination point avoiding collisions with each other and with moving obstacles in their environment . the interaction rules lead the agents to adapt their velocity vectors through a modification of the relative bearing angle and the relative elevation . moreover the model satisfies the limited field of view constraints resulting from individual perception sensitivity . from the proposed individual based model , a mean - field kinetic model is derived . simulations are performed to show the effectiveness of the proposed model . keywords . collision avoidance , individual - based models .
|
signal processing plays a central role in a truly enormous range of astrophysical problems. these include, for example, rotational periodicity, the behavior of stellar magnetic activity (flares, spots, faculae and plages), oscillations and noise. much of traditional signal processing has relied upon a relatively small class of problems, for example stationary and cyclostationary signals characterized by a form of translational invariance. it is not surprising that the fourier transform (ft) is considered the key tool in the analysis and manipulation of these problems. but there are many signals whose defining characteristic is their invariance not to translation but rather to scale, i.e., the process exhibits a dependence on different time scales. in addition, while the ft plays a central role in the analysis and manipulation of both statistically and deterministically translation-invariant signals, the _ wavelet transform _ (wt) plays an analogous role for these kinds of scale-invariant signals. wavelet analysis is becoming a common tool for analyzing localized variations of power within a time series. by decomposing a time series into the time-frequency plane, it is possible to determine both the dominant modes of variability and how those modes vary in time (see torrence & compo 1998 for further details). contrary to classical fourier analysis, which decomposes a signal into different sines and cosines that are not bounded in time, the wavelet transform uses functions characterized by scale (period) and position in time. the wavelet transform has been used for numerous studies in astrophysics, including signal noise periodicity and decomposition as well as the signature of differential rotation in stellar light curves. in the present work, we apply the morlet wavelet with an adjustable parameter (see section 2), which can be fine-tuned to produce optimal resolutions of time and frequency, where large values of the parameter give better time resolution. the use of the morlet wavelet allows the best trade-off between time and frequency resolution, as the gaussian function is its own fourier transform (oliver et al.). in addition, we apply the haar wavelet for decomposition at levels of the light curves. we use the wavelab package (a library of matlab routines for wavelet analysis) for the decomposition and a modified version of the colorado package for the wavelet maps of synthetic and observed light curves. we also apply the wavelet analysis of signals with gaps and of unevenly spaced data (frick et al.). many wavelet families can be proposed, depending on the nature of the problem. the most commonly used in astrophysical applications is the morlet wavelet, which can be defined as a generalization of the windowed fourier transform. the wavelet transform uses a window whose width is a function of the frequency. several types of wavelets can be used.
however, if the signal is sinusoidal, the wavelet should also be chosen to be sinusoidal. the morlet wavelet (grossmann & morlet 1984) represents a sinusoidal oscillation contained within a gaussian envelope,

$$\psi_{\nu,\tau}(t) \;\propto\; e^{-c\,[\nu\,(t-\tau)]^{2}}\; e^{-i\,2\pi\,\nu\,(t-\tau)} .$$

the wavelet transform is then given by

$$w(\nu,\tau) \;=\; \int \psi^{*}_{\nu,\tau}(t)\,x(t)\,dt ,$$

where $\psi^{*}$ is the complex conjugate of $\psi$ and $x(t)$ is the signal, and an element of the discrete wavelet transform is

$$w_{j}(\nu,\tau) \;=\; x(t_{j})\; e^{-c\,[\nu\,(t_{j}-\tau)]^{2}}\; e^{-i\,2\pi\,\nu\,(t_{j}-\tau)} .$$

the parameter $c$ controls the resolution in both frequency and time (baudin et al. 1994): and therefore we have the following uncertainty relationship: the relative frequency resolution is uniquely determined by $c$, whereas the time resolution depends on the frequency itself, so as to hold the number of oscillations inside the wavelet constant. we choose $c = 0.005$ in our analysis in order to balance the time and frequency resolution.

[figure fig1 caption fragment: wavelet map showing the 30 days (differential rotation period), 158 days, 1 year (seasonal period) and 1.3 years (474.5 days) periodicities; the maximum variance corresponds to 1.3 years.]
[figure fig2 caption fragment: wavelet map of the voyager 1 data; this effect can be caused by dispersion of the solar magnetic field with distance.]
[figure fig3 caption fragment: wavelet map of a synthetic light curve with effective temperature 6000 k, log g = 4.5 (cgs units), a rotation period of 3 days and facular behaviour of type f; the period of 1.5 days corresponds to a spot shifted by 180 degrees in longitude.]

figures [fig1], [fig2] and [fig3] show the morlet wavelet amplitude maps corresponding, respectively, to oscillations in the photospheric magnetic field of the sun (nso/kitt peak data), to the daily averages of the magnetic field strength b versus time measured by voyager 1 (v1) during 1978, and to a synthetic light curve produced by a. f. lanza. the nso/kitt peak data, as well as the voyager 1 and synthetic light curve data, are to date the best proxies for the stellar light curves that will be obtained by the corot space mission. several time series will be used as examples of wavelet analysis. these series include the nso/kitt peak data used to measure the 1.3 year (rotation of the sun near the base of its convection zone) and 158 day (high energy solar flares related to a periodic emergence of magnetic flux that appears near the maxima of some solar cycles) periodicities, as well as the rotational period of 30 days (see fig. [fig1]). we have used a wavelet analysis (hempelmann & donahue 1997; hempelmann 2002; lanza et al. 2003) to recover information on the solar rotation rate from its _ stellar-like _ light curve. we also apply the wavelet analysis to sunspot data, reproducing results found in the literature (oliver et al. 1998; krivova & solanki 2002). we also analyze a synthetic light curve produced by a _ theoretical simulator _ developed by the group of stellar astronomy at natal, obtaining well defined periodicities compared with the real values. we confirm the results obtained by analysis based on the lomb-scargle periodogram and by the phase dispersion minimization method (pdm) for period search using the peranso program. but, in contrast with these methods, the wavelet procedure makes possible a global and local analysis of the periodicities.
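a direct transcription of the discretised transform above can be sketched as follows: the amplitude map |w(ν,τ)| is evaluated on a grid of trial frequencies, using the reconstructed gaussian-envelope kernel and omitting any normalisation factor. it is written for possibly unevenly spaced time series with simple gradient-based quadrature weights; this is an illustration, not the wavelab or colorado implementation used in the paper.

```python
import numpy as np

def morlet_amplitude_map(t, x, freqs, c=0.005):
    """Amplitude map |W(nu, tau)| of the discretised Morlet transform quoted
    above, with tau evaluated at the sampling times themselves.

    t, x   : 1-d arrays of (possibly unevenly spaced) times and signal values
    freqs  : 1-d array of trial frequencies nu (inverse time units of t)
    c      : adjustable resolution parameter (0.005 in the text)
    """
    t = np.asarray(t, float); x = np.asarray(x, float)
    dt = np.gradient(t)                            # quadrature weights for uneven sampling
    W = np.empty((len(freqs), len(t)), dtype=complex)
    for k, nu in enumerate(freqs):
        for m, tau in enumerate(t):
            u = nu * (t - tau)
            kern = np.exp(-c * u**2) * np.exp(-2j * np.pi * nu * (t - tau))
            W[k, m] = np.sum(x * np.conj(kern) * dt)
    return np.abs(W)                               # rows: frequencies, columns: times
```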
the period of rotational modulation changes during the solar cycle because the variability of : first , the latitude of the activity belts and the mean lifetime of the surface features .the rotational modulation signal can be masked by the active region evolution due to its variability .the use of wavelet applied in a signal , demonstrated to be a powerful tool for analysis of signals with non stationary features . on the other hand, we can through the wavelet method establish clearly the presence of a persistent signal .but , through the decomposition at level ( or frequency) we can obtain other periodicities that are not visible when we treat simultaneously all frequencies .this work was been supported by continuous grants from cnpq brazilian agency , by a pronex grant of the fapern rio grande do norte agency .0 c. torrence and g. p. compo , _ bull .* 79 * ( 1998 ) 61 .r. oliver , j. l. ballester and f. baudin , _ nature _ * 394 * ( 1998 ) 552 .p. frick , a. grossmann and p. tchamitchian , _ journal of mathematical physics _ * 39 * ( 1998 ) 4091 .a. grossman and j. morlet , _siam j. math . anal_. * 15 * ( 1984 ) 723 .f. baudin , a. gabriel and d. gibert , _ a&a _ * 285 * ( 1994 ) 29 .a. hempelmann and r. a. donahue ,_ a&a _ * 322 * ( 1997 ) 835 .hasler , g. rudiger and j. staude , _ an _ * 2 * ( 2002 ) 123 .a. hempelmann , _ a&a _ 388 ( 2002 ) 540 .a. f. lanza , m. rodono , i. pagano , p. barge and a. llebaria , _a&a _ * 403 * ( 2003 ) 1135 .n. a. krivova and s. k. solanki , 2002 ,_ a&a _ * 394 * ( 2002 ) 701 .k. g. strassmeier and k. olah , in _ second eddington workshop : stellar structure and habitable planet finding _, eds . f. favata , s. aigrain and a. wilson ( italy , palermo , 2003 ) , p. 149 . j. polygiannakis , p. preka papadema and x. moussas , _ mnras _ * 343 * ( 2003 ) 725 .
|
the wavelet transform has been used for numerous studies in astrophysics , including signal noise periodicity and decomposition as well as the signature of differential rotation in stellar light curves . in the present work , we apply the morlet wavelet with an adjustable parameter , which can be fine - tuned to produce optimal resolutions of time and frequency , and the haar wavelet for decomposition at levels of light curves . we use the wavelab package ( library of matlab routines for wavelet analysis ) for the decomposition and a modified version of colorado package for the wavelet maps of synthetic and observed light curve . from different applications , including virgo / soho , nso / kitt peak , voyager 1 and sunspot data and synthetic light curve produced by different simulators , we show that this technique is a solid procedure to extract the stellar rotation period and possible variations due to active regions evolution . in this paper we show the morlet wavelet amplitude maps , respectively corresponding to oscillations in the photospheric magnetic field of the sun ( nso / kitt peak data ) , the daily averages of the magnetic field strength b versus time measured by voyager 1 ( v1 ) during 1978 , and synthetic light curve produced by a. f. lanza . we can also identify the noise level , as well as the contribution for the light curves produced by intensity , variability and mean lifetime of spots . thus , we can identify clearly the temporal evolution of the rotation period in relation to other periodicity phenomena affecting stellar light curves . in this context , because the wavelet technique is a powerful tool to solve , in particular , not trivial cases of light curves , we are confident that such a procedure will play an important role on the corot data analysis .
|
a first approach in modeling transmission dynamics of infectious diseases , and more particularly in estimating age - dependent transmission rates , was described by .the idea is to impose different mixing patterns on the so - called waifw - matrix , hereby constraining the number of distinct elements for identifiability reasons , and to estimate the parameters from serological data .many authors have elaborated on this approach of , among which , and .however , estimates of important epidemiological parameters such as the basic reproduction number turn out to be sensitive with respect to the choice of the imposed mixing pattern .an alternative method was proposed by , where contact rates are modeled as a continuous contact surface and estimated from serological data .clearly , both methods involve a somewhat ad hoc choice , namely the structure for the waifw - matrix and the parametric model for the contact surface . alternatively , to estimate age - dependent transmission parameters , augmented seroprevalence data with auxiliary data on self - reported numbers of conversational contacts per person , whilst assuming that transmission rates are proportional to rates of conversational contact .the social contact surveys conducted as part of the polymod project , allow us to elaborate on this methodology presented by .the paper is organized as follows . in the next section ,we outline the buildup of the belgian social contact survey and the information available for each contact .further , we briefly explain the epidemiological characteristics of vzv and the serological data from belgium we use . in section 3, we illustrate the traditional approach of imposing mixing patterns to estimate the waifw - matrix from this serological data set . in section 4, a transition is made to the novel approach of using social contact data to estimate .we show that a bivariate smoothing approach allows for a more flexible and better estimate of the contact surface compared to the maximum likelihood estimation method of .further , some refinements are proposed , among which an elicitation of contacts with high transmission potential and a non - parametric bootstrap approach , assessing sampling variability and accounting for age uncertainty , as suggested by .our main result is the novel method of disentangling the waifw - matrix into two components : the contact surface and an age - dependent proportionality factor .the proposed method , as described in section 5 , tackles two dimensions of uncertainty .first , by estimating the contact surface from data on social contacts , we overcome the problem of choosing a completely parametric model for the waifw - matrix .second , to overcome the problem of model selection for the age - dependent proportionality factor , concepts of multi - model inference are applied and a model averaged estimate for is calculated .some concluding remarks are provided in the last section .several small scale surveys were made in order to gain more insight in social mixing behavior relevant to the spread of close contact infections . in order to refine on contact information ,a large multi - country population - based survey was conducted in europe as part of the polymod project . in belgium , this survey was conducted in a period from march until may 2006 . 
a total of 750 participants , selected through random digit dialing , completed a diary - based questionnaire about their social contacts during one randomly assigned weekday and one randomly assigned day in the weekend ( not always in that order ) . in this paper, we follow the sampling scheme of the polymod project and only consider one day for each participant . the data set consists of participant - related information such as age and gender , and details about each contact : age and gender of the contacted person , and location , duration and frequency of the contact . in casethe exact age of the contacted person was unknown , participants had to provide an estimated age range and the mean value is used as a surrogate .further , a distinction between two types of contacts was made : non - close contacts , defined as two - way conversations of at least three words in each others proximity , and close contacts that involve any sort of physical skin - to - skin touching .teenagers ( 9 - 17y ) filled in a simplified version of the diary and were closely followed up to anticipate interpretation problems . for children ( ) , a parent or exceptionally another adult caregiver filled in the diary .one adult respondent made over 1000 contacts and was considered an outlier to the data set .this person is likely very influential and therefore excluded from the analyses presented here .analyses are based on the remaining 749 participants . using census data on population sizes of different age by household size combinations, weights are given to the participants in order to make the data representative of the belgian population . in total , the 749 participants recorded 12775 contacts of which 3 are omitted from analysis due to missing age values for the contacted person . for a more in depth perspective on the belgian contact survey and the importance of contact rates on modeling infectious diseases , we refer to . primary infection with vzv , also known as human herpes virus 3 ( hhv-3 ), results in varicella , commonly known as chickenpox , and mainly occurs in childhood .afterwards , the virus becomes dormant in the body and may reactivate in a later stage , resulting in herpes zoster , commonly known as shingles .infection with vzv occurs through direct or aerosol contact with infected persons .a person infected with chickenpox is able to transmit the virus for about 7 days .following and , we ignore chickenpox cases resulting from contact with persons suffering from shingles .zoster indeed has a limited impact on transmission dynamics when considering large populations with no immunization program . in a period from november 2001 until march 2003 , 2655serum samples in belgium were collected and tested for vzv . together with the test results , gender and age of the individuals were recorded . in the data set ,age ranges from 0 to 40 years and 6 individuals are younger than 6 months .belgium has no mass vaccination program for vzv .further details on the data set can be found in and .to describe transmission dynamics , a compartmental msir - model for a closed population of size is considered . 
by doing so , we explicitly take into account the fact that , in a first phase , newborns are protected by maternal antibodies and do not take part in the transmission process .we assume that mortality due to infection can be ignored , which is plausible for vzv in developed countries , and that infected individuals maintain lifelong immunity after recovery .further , demographic and endemic equilibrium are assumed , which means that the age - specific population sizes remain constant over time and that the disease is in an endemic steady state at the population level . for simplicity , we assume type i mortality defined as where denotes the age - specific mortality rate .this implies that everyone survives up to age and then promptly dies , which is a reasonable assumption when describing transmission dynamics for vzv in belgium ( see also * ? ? ?we make a similar assumption for the age - specific rate of losing maternal antibodies , which we will denote as ` type i maternal antibodies ' : meaning that all newborns are protected by maternal antibodies until a certain age and then move to the susceptible class instantaneously . under these assumptions ,the proportion of susceptibles is given by where denotes the age - specific force of infection , and if .if the mean duration of infectiousness is short compared to the timescale on which transmission and mortality rate vary , the force of infection can be approximated by : where denotes the transmission rate i.e. the per capita rate at which an individual of age makes an effective contact with a person of age , per year .formula ( [ foi ] ) reflects the so - called ` mass action principle ' , which implicitly assumes that infectious and susceptible individuals mix completely with each other and move randomly within the population . estimating transmission rates using seroprevalence datacan not be done analytically since the integral equation ( [ foi ] ) in general has no closed form solution .however , it is possible to solve this numerically by turning to a discrete age framework , assuming a constant force of infection in each age - class .denote the first age interval },a_{[2]}) ] , , where }=a ] . making use of ( [ eq : susc ] ) ,the prevalence of immune individuals of age is now well approximated by : }-a_{[k ] } ) - \lambda_{j}(a - a_{[j ] } ) \right),\ ] ] if belongs to the age interval .note that we allow the prevalence of immune individuals to vary continuously with age and that we do not summarize the binary seroprevalence outcomes into a proportion per age class .further , the force of infection for age class equals ( ) : }-a_{[k ] } ) \right)-\exp\left(-\sum_{k=1}^{j}\lambda_k(a_{[k+1]}-a_{[k]})\right ) \right ] , \label{foid}\ ] ] where denotes the per capita rate at which an individual of age class makes an effective contact with a person of age class , per year .the transmission rates make up a matrix , the so - called waifw - matrix .once the waifw - matrix is estimated , following and , the basic reproduction number can be calculated as the dominant eigenvalue of the next generation matrix with elements ( ) : }-a_{[i ] } \right ) \beta_{ij}.\end{aligned}\ ] ] represents the number of secondary cases produced by a typical infected person during his or her entire period of infectiousness , when introduced into an entirely susceptible population with the exception of newborns who are passively immune through maternal antibodies . 
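for concreteness, the next-generation-matrix calculation quoted above can be sketched as follows. the demographic constants (population size, life expectancy, mean infectious period) must be set to the values quoted later in the text, the prefactor follows the standard type i mortality expression, and any correction of the first age class for maternal antibodies is omitted in this sketch.

```python
import numpy as np

def r0_from_waifw(beta, age_breaks, N, L, D):
    """Dominant eigenvalue of the next-generation matrix built from a WAIFW
    matrix beta (J x J, per-year effective contact rates).

    age_breaks : length J+1 array with the age-class boundaries a_[1] .. a_[J+1]
    N, L, D    : population size, life expectancy and mean infectious period
                 (to be set to the values quoted in the text).
    """
    beta = np.asarray(beta, float)
    widths = np.diff(np.asarray(age_breaks, float))      # a_[i+1] - a_[i]
    G = (N * D / L) * widths[:, np.newaxis] * beta       # g_ij ~ N D / L (a_[i+1]-a_[i]) beta_ij
    return float(np.max(np.linalg.eigvals(G).real))      # R0 = dominant eigenvalue
```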
in the next section , we illustrate the traditional approach of imposing mixing patterns to estimate the waifw - matrix from seroprevalence data .the traditional approach of imposes different , somewhat ad hoc , mixing patterns on the waifw - matrix .note that , in the previous section , we ended up with a system of equations with unknown parameters ( [ foid ] ) and thus restrictions on these patterns are necessary .among the proposals in the literature , one distinguishes between several mixing assumptions such as homogeneous mixing ( ) , proportional mixing ( ) , separable mixing ( ) and symmetry ( ) .note that the latter two mixing assumptions require additional restrictions to be made .as illustrated by and , the structure imposed on the waifw - matrix has a high impact on the estimate of . in this section ,we assume the transmission rates to be constant within six discrete age classes ( ) .we follow and consider the following mixing patterns , based on prior knowledge of social mixing behavior , to model the waifw - matrix for vzv : in order to estimate the transmission parameters from seroprevalence data , we follow an iterative procedure from and .first , one assumes plausible starting values for and solves ( [ foid ] ) iteratively for the piecewise constant force of infection , which in its turn can be contrasted to the serology .second , this procedure is repeated under the constraint , until the bernoulli loglikelihood +(1-y_i)\log[1-\pi(a_i)]\bigr\},\ ] ] has been maximized . here, denotes the size of the serological data set , denotes a binary variable indicating whether subject had experienced infection before age and the prevalence is obtained from ( [ eq : pidisc ] ) . for the remainder of the paper ,the following parameters , specific for belgium anno 2003 , are kept constant when estimating the waifw - matrix and : size of the population aged 0 to 80 years , , and life expectancy at birth , .the mean duration of infectiousness for vzv is taken .type i mortality and type i maternal antibodies with age , are assumed . removing individuals younger than 6 months , the size of the serological data set becomes . in this application ,the population is divided into six age classes taking into account the schooling system in belgium , following : , , , , , .the last age class has a wide range because the serological data set only contains information for individuals up till 40 years .the following ml - estimate for is obtained assuming a piecewise constant force of infection and using constrained optimization to ensure monotonicity ( ) : .a graphical display of the fit is presented in figure [ fig : pcfit ] and a dashed line is used to indicate the estimated prevalence and force of infection for the age interval which lacks serological information . , which lacks serological information.,width=226 ] during the estimation process , non - identifiability problems occur for mixing patterns , and , which is related to the fact that .therefore , these mixing patterns are left from further consideration . for the remaining three , ml -estimates for and are presented in table [ table : waifw ] .note that mixing pattern has a regular configuration for the data , whereas and are non - regular since unconstrained ml - estimation induces negative estimates for .the estimate of ranges from 3.37 to 4.21 . 
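the fitting procedure just described repeatedly evaluates the bernoulli log-likelihood of the serological indicators under a piecewise-constant force of infection; a sketch of that evaluation is given below, with the age at loss of maternal protection entering as a parameter. this illustrates the likelihood only, not the constrained iterative estimation of the waifw elements.

```python
import numpy as np

def prevalence(a, lam, age_breaks, A=0.5):
    """Probability of being immune at exact age a under a piecewise-constant
    force of infection lam (one value per age class): 1 - exp of minus the
    hazard accumulated between the loss of maternal protection at age A and a."""
    a = np.atleast_1d(np.asarray(a, float))
    lo_b, hi_b = np.asarray(age_breaks[:-1], float), np.asarray(age_breaks[1:], float)
    lam = np.asarray(lam, float)
    pi = np.empty_like(a)
    for n, age in enumerate(a):
        lo = np.clip(lo_b, A, age)                 # exposure starts at age A
        hi = np.clip(hi_b, A, age)                 # and stops at the current age
        pi[n] = 1.0 - np.exp(-np.sum(lam * np.maximum(hi - lo, 0.0)))
    return pi

def bernoulli_loglik(y, a, lam, age_breaks):
    """Log-likelihood of the serological indicators y_i (1 = past infection)
    observed at ages a_i, as in the expression maximised in the text."""
    p = np.clip(prevalence(a, lam, age_breaks), 1e-12, 1 - 1e-12)
    return float(np.sum(y * np.log(p) + (1 - y) * np.log(1 - p)))
```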
a 95% bootstrap - based percentile confidence interval for presented as well , applying a non - parametric bootstrap by taking samples with replacement from the serological data .the fit of the three mixing patterns can be compared using model selection criteria , such as aic and bic . as can be seen from table [ table : waifw ] ,the aic - values ( equivalent to bic here ) are virtually equal and do not provide any basis to guide the choice of a mixing pattern .note that these results differ somewhat from those obtained by , where a different data set for vzv serology was used , collected from a large laboratory in the city of antwerp between october 1999 and april 2000 .in the previous section , we have illustrated some caveats involved in the traditional approach of imposing mixing patterns on the waifw - matrix .in general , the choice of the structures as well as the choice of the age classes are somewhat ad hoc .since evidence for mixing patterns is thought to be found in social contact data , i.e. governing contacts with high transmission potential , an alternative approach to estimate transmission parameters has emerged : augmenting seroprevalence data with data on social contacts . in , it was argued that is proportional to , the per capita rate at which an individual of age makes contact with a person of age , per year : we will refer to this assumption as the ` constant proportionality ' assumption , since represents a constant disease - specific factor . translating this assumption into the discrete framework with age classes },a_{[2 ] } ) , [ a_{[2]},a_{[3 ] } ) , \ldots , [ a_{[j]},a_{[j+1]}) ] .again , there is a large upper limit induced by the same issues reported in section [ sec : continuous ] . in order to assess the lack - of - data - problem, we simulate serological data for the age range using a constant prevalence , which is estimated from a thin plate regression spline model for the original serological data .sample sizes for one - year age groups are chosen according to the belgian population distribution in 2003 and the total size of serological data now amounts to .the seven candidate models for the proportionality factor are now applied to the ` complete ' serological data set .the results are presented in table [ tab : simulation ] and are , overall , quite similar to the results obtained before ( table [ tab : candidate ] ) .the 95% percentile confidence intervals for ( bootstrap samples converged out of 700 ) , however , are narrower since the simulated data for the age range are ` forcing ' the proportionality factor to follow a natural pace .this is illustrated for model in figure [ fig : m7 ] , where the estimated function is depicted for 100 randomly chosen bootstrap samples .particularly , right confidence interval limits for are smaller , whereas for most models the estimate seems to have decreased just a little bit .model selection uncertainty is illustrated quite nicely here , since four models , , , and , have akaike weights close to 0.24 and these models also had the most support for the original data set ( table [ tab : candidate ] ) .the model averaged estimate now equals 5.64 and the 95% bootstrap - based percentile confidence interval is ].,width=510 ] when estimating , we were actually faced with three problems of indeterminacy .first , there is lack of serological information for individuals aged 40 and older , second , prevalence of vzv rapidly stagnates , leading to an indeterminate force of infection and third , serological surveys do not 
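the bootstrap interval reported above can be reproduced schematically as follows, with the whole fitting pipeline (force-of-infection estimation under a chosen mixing pattern followed by the next-generation-matrix eigenvalue) wrapped in a user-supplied function; the number of replicates and the treatment of non-converged fits are left as choices.

```python
import numpy as np

def bootstrap_r0_ci(y, a, fit_fn, B=500, level=0.95, seed=1):
    """Nonparametric bootstrap percentile interval for R0, resampling the
    serological records (a_i, y_i) with replacement.

    fit_fn(y_boot, a_boot) -> R0 estimate; non-converged replicates may be
    signalled by returning np.nan, in which case they are dropped.
    """
    rng = np.random.default_rng(seed)
    y, a = np.asarray(y), np.asarray(a)
    n = len(y)
    draws = []
    for _ in range(B):
        idx = rng.integers(0, n, size=n)       # resample individuals with replacement
        draws.append(fit_fn(y[idx], a[idx]))
    draws = np.array([d for d in draws if np.isfinite(d)])
    lo, hi = np.percentile(draws, [100 * (1 - level) / 2, 100 * (1 + level) / 2])
    return lo, hi
```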
provide information related to infectiousness .models which only expressed age differences in for infectious individuals , such as the discrete model ( section [ sec : discrete ] ) and the continuous models and ( section [ sec : continuous ] ) , either did not lead to convergence or induced unrealistically large bootstrap estimates for at older ages .a sensitivity analysis in section [ sec : sens ] showed that lack of serological data had a big impact on confidence intervals for .we simulated data for the age range , giving rise to a model averaged estimate as displayed in figure [ fig : interval ] with corresponding confidence interval limits $ ] .the latter problems of indeterminacy might be controlled by combining information on the same infection over different countries or on different airborne infections , assuming there is a relation between the country- or disease - specific , respectively .this strategy already appeared beneficial when estimating directly from seroprevalence data , without using social contact data .further , the impact of intervention strategies such as school closures , might be investigated by incorporating transmission parameters , estimated from data on social contacts and serological status , in an age - time - dynamical setting .finally , it is important to note that the models rely on the assumptions of type i mortality and type i maternal antibodies in order to facilitate calculations .consequently , model improvements could be made through a more realistic approach of demographical dynamics .this study has been made and funded as part of `` simid '' , a strategic basic research project funded by the institute for the promotion of innovation by science and technology in flanders ( iwt ) , project number 060081 . this work has been partly funded and benefited from discussions held in polymod , a european commission project funded within the sixth framework programme , contract number : ssp22-ct-2004 - 502084 .we also gratefully acknowledge support from the iap research network nr p6/03 of the belgian government ( belgian science policy ) .beutels , p. , z. shkedy , m. aerts , and p. van damme ( 2006 ) .social mixing patterns for transmission models of close contact infections : exploring self - evaluation and diary - based data collection through a web - based interface . _ 134_ , 11581166 .diekmann , o. , j. a. p. heesterbeek , and j. a. j. metz ( 1990 ) . on the definition and the computation of the basic reproduction ratio in models for infectious diseases in heterogeneous populations ._ 28 _ , 65382 .edmunds , w. j. , c. j. ocallaghan , and d. j. nokes ( 1997 ) . who mixes with whom ?a method to determine the contact patterns of adults that may lead to the spread of airborne infections ._ 264 _ , 949957 .hens , n. , n. goeyvaerts , m. aerts , z. shkedy , p. van damme , and p. beutels ( 2009a ) . mining social mixing patterns for infectious disease models based on a two - day population survey in belgium ._ 9 _ , 5 .melegaro , a. , m. jit , n. gay , e. zagheni , and w. j. edmunds ( 2009 ) .what types of contacts are important for the spread of infections ? using contact survey data to explore european mixing patterns .technical report , health protection agency , centre for infections , london , uk . mossong , j. , n. hens , v. friederichs , i. davidkin , m. broman , b. litwinska , j. siennicka , v. p. trzcinska , a. , p. beutels , a. vyse , z. shkedy , m. aerts , m. massari , and g. 
gabutti ( 2008a ) .parvovirus b19 infection in five european countries : seroepidemiology , force of infection and maternal risk of infection ._ 136 _ , 10591068 .nardone , a. , f. de ory , m. carton , d. cohen , p. van damme , i. davidkin , m. rota , et al .the comparative sero - epidemiology of varicella zoster virus in 11 countries in the european region ._ 25 _ , 78667872 .ogunjimi , b. , n. hens , n. goeyvaerts , m. aerts , p. van damme , and p. beutels ( 2009 ) . using empirical social contact data to model person to person infectious disease transmission : an illustration for varicella ._ 218 _ , 80 - 87 .van effelterre , t. , z. shkedy , m. aerts , g. molenberghs , p. van damme , and p. beutels ( 2009 ) .contact patterns and their implied basic reproductive numbers : an illustration for varicella - zoster virus . _ 137_ , 4857 .
|
in dynamic models of infectious disease transmission , typically various mixing patterns are imposed on the so - called who - acquires - infection - from - whom matrix ( waifw ) . these imposed mixing patterns are based on prior knowledge of age - related social mixing behavior rather than observations . alternatively , one can assume that transmission rates for infections transmitted predominantly through non - sexual social contacts , are proportional to rates of conversational contact which can be estimated from a contact survey . in general , however , contacts reported in social contact surveys are proxies of those events by which transmission may occur and there may exist age - specific characteristics related to susceptibility and infectiousness which are not captured by the contact rates . therefore , in this paper , transmission is modeled as the product of two age - specific variables : the age - specific contact rate and an age - specific proportionality factor , which entails an improvement of fit for the seroprevalence of the varicella - zoster virus ( vzv ) in belgium . furthermore , we address the impact on the estimation of the basic reproduction number , using non - parametric bootstrapping to account for different sources of variability and using multi - model inference to deal with model selection uncertainty . the proposed method makes it possible to obtain important information on transmission dynamics that can not be inferred from approaches traditionally applied hitherto . _ keywords _ : basic reproduction number , bootstrap procedure , model selection and averaging , social contact data , transmission parameters , waifw . * estimating infectious disease parameters from data on social contacts and serological status * + * nele goeyvaerts , niel hens , benson ogunjimi , marc aerts , ziv shkedy , pierre van damme , philippe beutels * + institute for biostatistics and statistical bioinformatics , hasselt university , agoralaan 1 , b3590 diepenbeek , belgium + for health economics research and modeling infectious diseases ( chermid ) & centre for the evaluation of vaccination ( cev ) , vaccine & infectious disease institute , university of antwerp , belgium
|
bayesian inference for the stochastic volatility model has been extensively studied . in this paper , we focus on comparisons with the method of kastner and fruwirth - schnatter ( 2014 ) .this state - of - the - art method combines the method of kim et al .( 1998 ) with the asis ( ancillary sufficiency interweaving strategy ) technique of yu and meng ( 2011 ) .kastner and fruwirth - schnatter s method consists of two parts .the first is an update of the latent variables and the second is a joint update of and the latent variables .we improve this method here by saving and re - using sufficient statistics to do multiple parameter updates at little additional computational cost .the non - linear relationship between the latent and the observation process prohibits the direct use of kalman filters for sampling the latent variables .kim et al .( 1998 ) introduced an approximation to the stochastic volatility model that allows using kalman filters to draw samples of which can be later reweighed to give samples from their exact posterior distribution .this approximation proceeds as follows .first , the observation process for the stochastic volatility model is written in the form where has a distribution .next , the distribution of is approximated by a ten - component mixture of gaussians with mixture weights , means and variances .the values of these mixture weights , means and variances can be found in omori ( 2007 ) . at each step of the sampler , at each time , a single component of the mixture is chosen to approximate the distribution of by drawing a mixture component indicator with probabilities proportional to conditional on , the observation process is now linear and gaussian , as is the latent process : kalman filtering followed by a backward sampling pass can now be used to sample a latent sequence . for a description of this sampling procedure ,see petris et al .( 2009 ) .the mis - specification of the model due to the approximation can be corrected using importance weights where is the density , is the density and the index refers to a draw .posterior expectations of functions of can then be computed as , with the draws .we note that , when doing any other updates affecting in combination with this approximate scheme , we need to continue to use the same mixture of gaussians approximation to the observation process . if an update drawing from an approximate distribution is combined with an update drawing from an exact distribution , neither update will draw samples from their target distribution , since neither update has a chance to reach equilibrium before the other update disturbs things. we would then be unable to compute the correct importance weights to estimate posterior expectations of functions of .asis methods ( yu and meng ( 2011 ) ) are based on the idea of interweaving two parametrizations . for the stochastic volatility model ,these are the so - called non - centered ( nc ) and centered ( c ) parametrizations .the nc parametrization is the one in which the stochastic volatility model was originally presented above .the c parametrization for the stochastic volatility model is the mixture of gaussians approximation for c is the same as for nc .kastner and fruwirth - schnatter ( 2014 ) propose two new sampling schemes , gis - c and gis - nc , in which they interweave these two parametrizations , using either the nc or c parameterization as the baseline .the authors report a negiligible performance difference between using nc or c as the baseline . 
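before turning to the interweaving scheme used in our comparisons, the two ingredients of the approximation described above, drawing the mixture-component indicators and computing the importance weights, can be sketched as follows. the numerical values of the ten mixture weights, means and variances are those tabulated in omori (2007) and are passed in rather than reproduced here; the observation equation is assumed to take the standard form in which the transformed observation equals the latent log-volatility plus a log chi-squared(1) error.

```python
import numpy as np
from scipy.stats import norm

def sample_indicators(y_star, h, mix, rng):
    """Draw one mixture-component indicator per time point, with probability
    proportional to p_r * N(y*_t ; h_t + m_r, v_r).

    mix = (p, m, v): weights, means and variances of the ten-component Gaussian
    mixture approximating the log chi^2_1 density (Omori 2007, not reproduced)."""
    p, m, v = mix
    dens = p[None, :] * norm.pdf(y_star[:, None], h[:, None] + m[None, :],
                                 np.sqrt(v)[None, :])
    dens /= dens.sum(axis=1, keepdims=True)
    u = rng.random(len(y_star))
    return (np.cumsum(dens, axis=1) < u[:, None]).sum(axis=1)   # inverse-cdf draw per row

def log_importance_weight(y_star, h, mix):
    """log w = log p(y*|h) - log p_approx(y*|h), summed over time: the exact
    log chi^2_1 observation density against its mixture approximation."""
    p, m, v = mix
    eps = y_star - h
    log_exact = -0.5 * np.log(2 * np.pi) + 0.5 * eps - 0.5 * np.exp(eps)
    approx = (p[None, :] * norm.pdf(eps[:, None], m[None, :],
                                    np.sqrt(v)[None, :])).sum(axis=1)
    return float(np.sum(log_exact - np.log(approx)))
```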
for the purposes of our comparisons ,we use the method with nc as the baseline , gis - nc , which proceeds as follows . 1 .draw given using the linear gaussian approximation update ( nc ) 2 . draw given using a metropolis update ( nc ) 3 .move to c by setting 4 .draw given using a metropolis update ( c ) 5 .move back to nc by setting 6 .redraw the mixture component indicators given .theorem 4 of yu and meng ( 2011 ) establishes a link between asis and the px - da ( parameter expansion - data augmentation ) method of liu and wu ( 1999 ) . in the case of the stochastic volatility model, this means that we can view the asis scheme for updating and as a combination of two updates , both done in the nc parametrization .the first of these draws new values for conditional on .the second draws new values for both and , such that when we propose to update to and to , we also propose to update the sequence to . for this second update, the metropolis acceptance probability needs to be multiplied by a jacobian factor to account for scaling .a joint translation update for and has been previously considered by liu and sabatti ( 2000 ) and successfully applied to to stochastic volatility model .scale updates are considered by liu and sabatti ( 2000 ) as well , though they do not apply them to the stochastic volatility model .the view of asis updates as joint updates to and makes it easier to see why asis updates improve efficiency . at first glance, they look like they only update the parameters , but they actually end up proposing to change both and in a way that preserves dependence between them .this means that moves proposed in asis updates are more likely to end up in a region of high posterior density , and so be accepted .kastner and fruwirth - schnatter ( 2014 ) do a single metropolis update of the parameters for every update of the latent sequence .however , we note that given values for the mixture indices , and , low - dimensional sufficient statistics exist for all parameters in the centered parametrization . in the non - centered parametrization , given , and , low - dimensional sufficient statistics exist for .we propose doing multiple metropolis updates given saved values of these sufficient statistics ( for all parameters in the case of c and for in the case of nc ) .this allows us to reach equilibrium given a fixed latent sequence at little computational cost since additional updates have small cost , not dependent on .also , this eliminates the need to construct complex proposal schemes , since with these repeated samples the algorithm becomes less sensitive to the particular choice of proposal density .the sufficient statistics in the case of nc are with the log - likelihood of as a function of the sufficient statistics being in the case of c the sufficient statistics are with the log - likelihood as a function of the sufficient statistics being the details of the derivations are given in the appendix .the general framework underlying ensemble mcmc methods was introduced by neal ( 2010 ) . 
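the saving of sufficient statistics can be illustrated in the non-centred parametrisation as follows, under the assumption that the observation equation there reads y*_t = mu + sigma * x_t + eps_t with eps_t drawn from the selected mixture component. the exact statistics used in the paper are derived in its appendix; the ones below are simply the generic weighted-least-squares sums, so each additional metropolis update of (mu, sigma) costs o(1) once they are stored.

```python
import numpy as np

def nc_sufficient_stats(y_star, x, mix_means, mix_vars, r):
    """Sums needed to evaluate, up to a constant, the log-likelihood of
    (mu, sigma) given the latent sequence x and mixture indicators r."""
    w = 1.0 / mix_vars[r]                   # per-observation precisions
    z = y_star - mix_means[r]               # observations with the mixture means removed
    return dict(sw=np.sum(w), swx=np.sum(w * x), swxx=np.sum(w * x**2),
                swz=np.sum(w * z), swzx=np.sum(w * z * x), swzz=np.sum(w * z**2))

def nc_loglik(mu, sigma, s):
    """-(1/2) sum_t w_t (z_t - mu - sigma x_t)^2, expanded so that each call
    uses only the six stored sums and is O(1)."""
    return -0.5 * (s["swzz"] - 2 * mu * s["swz"] - 2 * sigma * s["swzx"]
                   + 2 * mu * sigma * s["swx"] + mu**2 * s["sw"] + sigma**2 * s["swxx"])

def metropolis_sweep(mu, sigma, s, log_prior, n_updates, step, rng):
    """Repeat cheap random-walk Metropolis updates of (mu, sigma) given the
    fixed latent sequence; log_prior should return -np.inf outside the support
    (e.g. for sigma <= 0)."""
    lp = nc_loglik(mu, sigma, s) + log_prior(mu, sigma)
    for _ in range(n_updates):
        mu_p = mu + step * rng.standard_normal()
        sigma_p = sigma + step * rng.standard_normal()
        lp_p = nc_loglik(mu_p, sigma_p, s) + log_prior(mu_p, sigma_p)
        if np.log(rng.random()) < lp_p - lp:
            mu, sigma, lp = mu_p, sigma_p, lp_p
    return mu, sigma
```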
an ensemble mcmc method using embedded hmms for parameter inference in non - linear , non - gaussian state space models was introduced by shestopaloff and neal ( 2013 ) .we briefly review ensemble methods for non - linear , non - gaussian state space models here .ensemble mcmc builds on the idea of mcmc using a temporary mapping .suppose we are interested in sampling from a distribution with density on .we can do this by constructing a markov chain with transition kernel with invariant distribution .the temporary mapping strategy takes to be a composition of three stochastic mappings .the first mapping , , takes to an element of some other space .the second , , updates to .the last , , takes back to some .the idea behind this strategy is that doing updates in an intermediate space may allow us to make larger changes to , as opposed to doing updates directly in . in the ensemble method , the space is taken to be the -fold cartesian product of .first , mapped to an ensemble , with the current value assigned to , with chosen uniformly at random .the remaining elements for are chosen from their conditional distribution under an ensemble base measure , given that .the marginal density of an ensemble element in the ensemble base measure is denoted by .next , is updated to using any update that leaves invariant the _ ensemble density _ finally , a new value is chosen by selecting an element from the ensemble with probabilities proportional to .the benefit of doing , say metropolis , updates in the space of ensembles is that a proposed move is more likely to be accepted , since for the ensemble density to be large it is enough that the proposed ensemble contains at least some elements with high density under . in shestopaloff and neal ( 2013 ) , we consider an ensemble over latent state sequences .specifically , the current state , , consisting of the latent states and the parameters is mapped to an ensemble where the ensemble contains all distinct sequences passing through a collection of pool states chosen at each time . the ensemble is then updated to using a metropolis update that leaves invariant . at this step , only is changed .we then map back to a new , where is now potentially different from the original .we show that this method considerably improves sampling efficiency in the ricker model of population dynamics . as in the original neal ( 2010 ) paper, we emphasize here that applications of ensemble methods are worth investigating when the density at each of the elements of an ensemble can be computed in less time than it takes to do separate density evaluations .for the stochastic volatility model , this is possible for ensembles over latent state sequences , and over the parameters and . in this paper, we will only consider joint ensembles over and over .since we will use in the mcmc state , we will refer to ensembles over below .we propose two ensemble mcmc sampling schemes for the stochastic volatility model .the first , ens1 , updates the latent sequence , , and by mapping to an ensemble composed of latent sequences and values of , then immediately mapping back to new values of and . the second ,ens2 , maps to an ensemble of latent state sequences and values of , like ens1 , then updates using an ensemble density summing over and , and finally maps back to new values of and . for both ens1 and ens2, we first create a pool of values with elements , and at each time , , a pool of values for the latent state , with elements . 
the current value of assigned to the pool element } ] .( since the pool states are drawn independently , we do nt need to randomly assign an index to the current and the current s in their pools . )the remaining pool elements are drawn independently from some distribution having positive probability for all possible values of and , say for and for .the total number of ensemble elements that we can construct using the pools over and over is .naively evaluating the ensemble density presents an enormous computational burden for , taking time on the order of . by using the forward algorithm , together with a `` caching '' technique, we can evaluate the ensemble density much more efficiently , in time on the order of .the forward algorithm is used to efficiently evaluate the densities for the ensemble over the .the caching technique is used to efficiently evaluate the densities for the ensemble over , which gives us a substantial constant factor speed - up in terms of computation time . in detail, we do the following .let be the initial state distribution , the transition density for the latent process and the observation probabilities .we begin by computing and storing the initial latent state probabilities which do not depend on for each pool state } ] in the pool and each pool state } ] and } ] in the pool }(x_{i } | \eta^{[l ] } ) & = & \frac{p(y_{i}|x_{i } , \eta^{[l]})}{\kappa_{i}(x_{i})}\sum_{k=1}^{l_{x } } p(x_{i } | x_{i-1}^{[k ] } ) \alpha_{i-1}^{[l]}(x_{i-1}^{[k ] } | \eta^{[l ] } ) , \quad i = 1 , \ldots , n\end{aligned}\ ] ] with } , \ldots , x_{i}^{[l_{x}]}\} ] by } = \sum_{k=1}^{l_{x}}\alpha_{i}^{[l]}(x_{i } | \eta^{[l]}) ] values and using the normalized } ] , since we are guaranteed to have a log - likelihood that is not for the current value of in the mcmc state . for each } ] .even with caching , computing the forward probabilities for each } ] in the pool , the computation of the ensemble densities } ] from the marginal ensemble distribution , with probabilities proportional to } ] , we sample a latent sequence conditional on } ] .then , given the sampled value of , we sample from the pool at time with probabilities proportional to }(x_{i-1 } | \eta^{[l]}) ] and all latent sequences in the ensemble , }$ ] .this approximates updating using the posterior density of with and integrated out , when the number of pool states is large .the update nevertheless leaves the correct distribution exactly invariant , even if the number of pool states is not large .a good choice for the pool distribution is crucial for the efficient performance of the ensemble mcmc method . for a pool distribution for ,a good candidate is the stationary distribution of in the ar(1 ) latent process , which is .the question here is how to choose . 
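the forward pass with caching described above can be sketched as follows ; the placement of the 1/kappa factors and the backward selection step follow the recursion displayed above as closely as the garbled symbols allow , so this should be read as an illustrative reconstruction rather than the paper's exact code . the toy densities at the end are placeholders , and selecting eta with weights proportional to the ensemble densities implicitly assumes the eta pool density cancels against its prior ( as when pool values are drawn from the prior , which is the choice described below ) ; otherwise an extra importance ratio would enter .

```python
import numpy as np

def logsumexp(a, axis=None):
    m = np.max(a, axis=axis, keepdims=True)
    s = m + np.log(np.sum(np.exp(a - m), axis=axis, keepdims=True))
    return s.item() if axis is None else np.squeeze(s, axis=axis)

def forward_all_eta(y, x_pool, eta_pool, log_init, log_trans, log_obs, log_kappa):
    """Forward pass over the pool ensemble.  The eta-independent pieces -- the initial
    log-probabilities and the L_x-by-L_x transition tables between pool states -- are
    computed once and reused for every eta in its pool, which is the caching referred
    to above.  Returns the log ensemble weight of each eta and the normalized forward
    tables, reused when mapping back to a single latent sequence."""
    n, L_x = x_pool.shape
    lp0 = log_init(x_pool[0])                                        # (L_x,)
    ltr = [log_trans(x_pool[i], x_pool[i - 1]) for i in range(1, n)] # (L_x, L_x) each
    lka = [log_kappa(i, x_pool[i]) for i in range(n)]
    log_w, alphas = np.empty(len(eta_pool)), []
    for l, eta in enumerate(eta_pool):
        la = lp0 + log_obs(y[0], x_pool[0], eta) - lka[0]
        norms = [logsumexp(la)]
        la = la - norms[-1]
        table = [la]
        for i in range(1, n):
            la = (log_obs(y[i], x_pool[i], eta) - lka[i]
                  + logsumexp(ltr[i - 1] + la[None, :], axis=1))
            norms.append(logsumexp(la))
            la = la - norms[-1]
            table.append(la)
        log_w[l] = np.sum(norms)              # log of the sum over all pool sequences
        alphas.append(np.array(table))
    return log_w, alphas

def sample_sequence(alpha, x_pool, log_trans, rng):
    """Map back: pick x_n proportional to alpha_n, then walk backwards picking x_{i-1}
    proportional to p(x_i | x_{i-1}) * alpha_{i-1}."""
    n, L_x = x_pool.shape
    idx = np.empty(n, dtype=int)
    idx[-1] = rng.choice(L_x, p=np.exp(alpha[-1] - logsumexp(alpha[-1])))
    for i in range(n - 1, 0, -1):
        lw = log_trans(x_pool[i], x_pool[i - 1])[idx[i]] + alpha[i - 1]
        idx[i - 1] = rng.choice(L_x, p=np.exp(lw - logsumexp(lw)))
    return x_pool[np.arange(n), idx]

# toy densities (placeholders): AR(1) latent states with unit innovations; only the
# observation density depends on eta, which is what makes the caching worthwhile
phi = 0.95
norm_lpdf = lambda z, m, s: -0.5 * np.log(2 * np.pi * s**2) - (z - m)**2 / (2 * s**2)
log_init  = lambda x0:        norm_lpdf(x0, 0.0, 1.0 / np.sqrt(1 - phi**2))
log_trans = lambda xi, xim1:  norm_lpdf(xi[:, None], phi * xim1[None, :], 1.0)
log_obs   = lambda yi, xi, e: norm_lpdf(yi, 0.0, np.exp(0.5 * (e + xi)))
log_kappa = lambda i, xi:     norm_lpdf(xi, 0.0, 1.1 / np.sqrt(1 - phi**2))

rng = np.random.default_rng(4)
y = 0.02 * rng.normal(size=50)
x_pool = 1.1 / np.sqrt(1 - phi**2) * rng.normal(size=(50, 30))
eta_pool = rng.normal(-7.0, 1.0, size=8)
log_w, alphas = forward_all_eta(y, x_pool, eta_pool, log_init, log_trans, log_obs, log_kappa)
l_new = rng.choice(len(eta_pool), p=np.exp(log_w - logsumexp(log_w)))
x_new = sample_sequence(alphas[l_new], x_pool, log_trans, rng)
print("selected eta:", round(eta_pool[l_new], 3), "  first latent values:", np.round(x_new[:4], 3))
```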
for ens1 , which does not change , we can simply use the current value of from the mcmc state , call it and draw pool states from for some scaling factor .typically , we would choose in order to ensure that for different values of , we produce pool states that cover the region where has high probability density .we can not use this pool selection scheme for ens2 because the reverse transition after a change in would use different pool states , undermining the proof via reversibility that the ensemble transitions leave the posterior distribution invariant .however , we can choose pool states that depend on both the current and the proposed values of , say and , in a symmetric fashion .for example , we can propose a value , and draw the pool states from where is the average of and .the validity of this scheme can be seen by considering to be an additional variable in the model ; proposing to update to can then be viewed as proposing to swap and within the mcmc state .we choose pool states for by sampling them from the model prior .alternative schemes are possible , but we do not consider them here .for example , it is possible to draw local pool states for which stay close to the current value of by running a markov chain with some desired stationary distribution steps forwards and steps backwards , starting at the current value of .for details , see neal ( 2003 ) . in our earlier work ( shestopaloff and neal ( 2013 ) ) , one recommendation we made was to consider pool states that depend on the observed data at a given point , constructing a `` pseudo - posterior '' for using data observed at time or in a small neighbourhood around . for the ensembleupdates ens1 and ens2 presented here , we can not use this approach , as we would then need to make the pool states also depend on the current values of and , the latter of which is affected by the update .we could switch to the centered parametrization to avoid this problem , but that would prevent us from making a fast variable .the goal of our computational experiments is to determine how well the introduced variants of the ensemble method compare to our improved version of the kastner and fruwirth - schnatter ( 2014 ) method .we are also interested in understanding when using a full ensemble update is helpful or not .we use a series simulated from the stochastic volatility model with parameters with .a plot of the data is presented in figure [ fig : data ] .0.48 0.48 we use the following priors for the model parameters . \\ \sigma^{2 } & \sim & \textnormal{inverse - gamma}(2.5 , 0.075)\end{aligned}\ ] ] we use the parametrization in which the inverse - gamma has probability density for the and quantiles of this distribution are approximately + . 
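the symmetric pool construction for ens2 described above ( pool states that depend on the current and proposed values of the persistence parameter only through their average , so that the reverse move would use the same pools ) can be written in a few lines ; the scale factor is a tuning constant , and the actual values used in the experiments are those listed in the tuning table below , not the number chosen here .

```python
import numpy as np

def symmetric_latent_pool(x_cur, phi_cur, phi_prop, L_x, scale, rng):
    """Latent pool whose distribution depends on phi only through the average of the
    current and proposed values, so the reverse move phi_prop -> phi_cur would use the
    same pool density (keeping the ensemble update for phi reversible)."""
    phi_bar = 0.5 * (phi_cur + phi_prop)
    sd = scale / np.sqrt(1.0 - phi_bar**2)        # scaled stationary sd of the AR(1) process
    pool = rng.normal(0.0, sd, size=(len(x_cur), L_x))
    pool[:, 0] = x_cur                            # the current latent sequence stays in the pool
    return pool

rng = np.random.default_rng(2)
pool = symmetric_latent_pool(np.zeros(500), phi_cur=0.95, phi_prop=0.97,
                             L_x=50, scale=1.1, rng=rng)
print(pool.shape)
```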
in the mcmc state , we transform and to with the priors transformed correspondingly .we compare three sampling schemes the kastner and fruwirth - schnatter ( kf ) method , and our two ensemble schemes , ens1 , in which we map to an ensemble of and values and immediately map back , and ens2 , in which we additionally update with an ensemble update before mapping back .we combine the ensemble scheme with the computationally cheap asis metropolis updates .it is sensible to add cheap updates to a sampling scheme if they are available .note that the asis ( or translation and scale ) updates we use in this paper are generally applicable to location - scale models and are not restricted by the linear and gaussian assumption .pilot runs showed that updates appears to be the point at which we start to get diminishing returns from using more metropolis updates ( given the sufficient statistics ) in the kf scheme .this is the number of metropolis updates we use with the ensemble schemes as well .the kf scheme updates the state as follows : 1 .update , using the kalman filter - based update , using the current mixture indicators .2 . update the parameters using the mixture approximation to the observation density .this step consists of metropolis updates to given the sufficient statistics for nc , followed by one joint update of and .3 . change to the c parametrization .4 . update all three parameters simultaneously using metropolis updates , given the sufficient statistics for c. note that this update does not depend on the observation density and is therefore exact .update the mixture indicators .the ens1 scheme proceeds as follows : 1 .map to an ensemble of and .2 . map back to a new value of and .3 . do steps 2 ) - 4 ) as for kf , but with the exact observation density .the ens2 scheme proceeds as follows : 1 .map to an ensemble of and .2 . update using an ensemble metropolis update .3 . map back to a new value of and .4 . do steps 2 ) - 4 ) as for kf , but with the exact observation density .the metropolis updates use a normal proposal density centered at the current parameter values .proposal standard deviations for the metropolis updates in nc were set to estimated marginal posterior standard deviations , and to half of that in c. this is because in c , we update all three parameters at once , whereas in nc we update and jointly and separately .the marginal posterior standard deviations were estimated using a pilot run of the ens2 method .the tuning settings for the metropolis updates are presented in table [ table : metstat ] . for ensemble updates of , we also use a normal proposal density centered at the current value of , with a proposal standard deviation of , which is double the estimated marginal posterior standard deviation of .the pool states over are selected from the stationary distribution of the ar(1 ) latent process , with standard deviation for the ens1 scheme and for the ens2 scheme .we used the prior density of to select pool states for . & & & & for ( nc ) & for ( nc ) & & & & for + kf & & & & & 0.12 & & & & + ens1 & & & & & & & & & + ens2 & & & & & & & & & + for each method , we started the samplers from randomly chosen points .parameters were initalized to their prior means ( which were for , for and for ) , and each , was initialized independently to a value randomly drawn from the stationary distribution of the ar(1 ) latent process , given set to the prior mean . 
for the kf updates , the mixture indicators where all initialized to s ,this corresponds to the mixture component whose median matches the median of the distribution most closely .all methods were run for approximately the same amount of computational time . before comparing the performance of the methods, we verified that the methods give the same answer up to expected variation by looking at the confidence intervals each produced for the posterior means of the parameters .these confidence intervals were obtained from the standard error of the average posterior mean estimate over the five runs .the kf estimates were adjusted using the importance weights that compensate for the use of the approximate observation distribution .no significant disagreement between the answers from the different methods was apparent .we then evaluated the performance of each method using estimates of autocorrelation time , which measures how many mcmc draws are needed to obtain the equivalent of one independent draw .to estimate autocorrelation time , we first estimated autocovariances for each of the five runs , discarding the first of the run as burn - in , and plugging in the overall mean of the five runs into the autocovariance estimates .( this allows us to detect if the different runs for each method are exploring different regions of the parameter / latent variable space ) .we then averaged the resulting autocovariance estimates and used this average to get autocorrelation estimates .finally , autocorrelation time was estimated as , with chosen to be the point beyond which the become approximately .all autocovariances were estimated using the fast fourier transform for computational efficiency .the results are presented in tables and .the timings for each sampler represent an average over iteratons ( each iteration consisting of the entire sequence of updates ) , with the samplers started from a point taken after the sampler converged to equilibrium .the program was written in matlab and run on a linux system with an intel xeon x5680 3.33 ghz cpu . 
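the autocorrelation - time estimate described above ( autocovariances per run about the overall mean , averaged across runs , with the autocorrelations summed out to the lag where they fall into the noise , and the fft used for speed ) can be sketched as follows ; the burn - in fraction and the noise cutoff are placeholders for the values used in the paper .

```python
import numpy as np

def autocov_fft(run, mean):
    """Autocovariances of one run about a supplied (overall) mean, via the FFT."""
    z = np.asarray(run, dtype=float) - mean
    n = len(z)
    f = np.fft.rfft(z, 2 * n)                        # zero-padded transform
    return np.fft.irfft(f * np.conj(f))[:n] / n      # lags 0 .. n-1

def act(runs, burn_frac=0.1, noise_band=None):
    """Autocorrelation time estimated from several runs: drop a burn-in fraction,
    centre every run about the overall mean, average the autocovariances, and sum
    1 + 2*rho_k out to the lag where the rho_k fall into the noise."""
    kept = [np.asarray(r)[int(burn_frac * len(r)):] for r in runs]
    overall = np.mean(np.concatenate(kept))
    n = min(len(r) for r in kept)
    acov = np.mean([autocov_fft(r[:n], overall) for r in kept], axis=0)
    rho = acov / acov[0]
    if noise_band is None:
        noise_band = 2.0 / np.sqrt(n * len(kept))    # crude default cutoff (assumption)
    k = 1
    while k < n and abs(rho[k]) > noise_band:
        k += 1
    return 1.0 + 2.0 * np.sum(rho[1:k])

# toy check: an AR(1) chain with phi = 0.9 has autocorrelation time (1+phi)/(1-phi) = 19
rng = np.random.default_rng(3)
def ar1_run(n, phi=0.9):
    z = np.zeros(n)
    for i in range(1, n):
        z[i] = phi * z[i - 1] + rng.normal()
    return z
print(act([ar1_run(50_000) for _ in range(5)]))
```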
for a fair comparison , we multiply estimated autocorrelation times by the time it takes to do one iteration and compare these estimates .& & & & & & & & & + & 1 & 195000 & 0.11 & 2.6 & 99 & 160 & 0.29 & 11 & 18 + & 10 & 180000 & 0.12 & 2.7 & 95 & 150 & 0.32 & 11 & 18 + & 30 & 155000 & 0.14 & 2.6 & 81 & 130 & 0.36 & 11 & 18 + & 50 & 140000 & 0.16 & 2.3 & 91 & 140 & 0.37 & 15 & 22 + & 1 & 155000 & 0.14 & 2.4 & 35 & 71 & 0.34 & 4.9 & 9.9 + & 10 & 135000 & 0.16 & 2.2 & 18 & 26 & 0.35 & 2.9 & 4.2 + & 30 & 110000 & 0.20 & 2.3 & 19 & 26 & 0.46 & 3.8 & 5.2 + & 50 & 65000 & 0.33 & 1.9 & 16 & 24 & 0.63 & 5.3 & 7.9 + & 1 & 115000 & 0.19 & 1.9 & 34 & 68 & 0.36 & 6.5 & 13 + & 10 & 95000 & 0.23 & 1.9 & 11 & 17 & 0.44 & 2.5 & 3.9 + & 30 & 55000 & 0.38 & 2.2 & 8.9 & 12 & 0.84 & 3.4 & 4.6 + & 50 & 55000 & 0.39 & 1.9 & 11 & 14 & 0.74 & 4.3 & 5.5 + & 1 & 85000 & 0.25 & 2.2 & 33 & 67 & 0.55 & 8.3 & 17 + & 10 & 60000 & 0.38 & 1.9 & 8.3 & 11 & 0.72 & 3.2 & 4.2 + & 30 & 50000 & 0.42 & 1.8 & 8.4 & 11 & 0.76 & 3.5 & 4.6 + & 50 & 45000 & 0.48 & 1.9 & 9.1 & 12 & 0.91 & 4.4 & 5.8 + & & & & & & & & & & + & 1 & 0.32 & 110000 & 0.20 & 2.5 & 100 & 170 & 0.5 & 20 & 34 + & 10 & 0.32 & 95000 & 0.23 & 2.4 & 91 & 140 & 0.55 & 21 & 32 + & 30 & 0.32 & 80000 & 0.27 & 2.5 & 97 & 150 & 0.68 & 26 & 41 + & 50 & 0.32 & 70000 & 0.30 & 2.7 & 90 & 140 & 0.81 & 27 & 42 + & 1 & 0.33 & 80000 & 0.26 & 2.3 & 34 & 68 & 0.6 & 8.8 & 18 + & 10 & 0.33 & 70000 & 0.31 & 2.3 & 18 & 26 & 0.71 & 5.6 & 8.1 + & 30 & 0.33 & 55000 & 0.39 & 2 & 18 & 27 & 0.78 & 7 & 11 + & 50 & 0.34 & 35000 & 0.61 & 2.4 & 12 & 19 & 1.5 & 7.3 & 12 + & 1 & 0.34 & 60000 & 0.36 & 1.7 & 33 & 69 & 0.61 & 12 & 25 + & 10 & 0.35 & 50000 & 0.44 & 2.1 & 10 & 15 & 0.92 & 4.4 & 6.6 + & 30 & 0.34 & 30000 & 0.71 & 1.8 & 10 & 15 & 1.3 & 7.1 & 11 + & 50 & 0.34 & 25000 & 0.81 & 1.8 & 12 & 17 & 1.5 & 9.7 & 14 + & 1 & 0.34 & 45000 & 0.49 & 2.2 & 29 & 61 & 1.1 & 14 & 30 + & 10 & 0.35 & 30000 & 0.72 & 1.6 & 7.3 & 11 & 1.2 & 5.3 & 7.9 + & 30 & 0.36 & 25000 & 0.86 & 1.6 & 7.3 & 9.3 & 1.4 & 6.3 & 8 + & 50 & 0.36 & 25000 & 0.96 & 1.8 & 5.9 & 7.8 & 1.7 & 5.7 & 7.5 + we ran the kf method for iterations , with estimated autocorrelation times using the original ( unweighed ) sequence for of , which after adjusting by computation time of seconds per iteration are .it follows that the ens1 method with set to and set to is better than the kf method by a factor of about for the parameter . 
for ens2 , the same settings and appears to give the best results , with ens2 worse by a factor of about than ens1 for sampling .we also see that the ens1 and ens2 methods are nt too sensitive to the particular tuning parameters , so long at there is a sufficient number of ensemble elements both for and for .the results show that using a small ensemble ( or so pool states ) over is particularly helpful .one reason for this improvement is the ability to use the caching technique to make these updates computationally cheap .a more basic reason is that updates of consider the entire collection of latent sequences , which allows us to make large changes to , compared to the metropolis updates .even though the ens2 method in this case is outperformed by the ens1 method , we have only applied it to one data set and there is much room for further tuning and improvement of the methods .a possible explanation for the lack of substantial performance gain with the ensemble method is that conditional on a single sequence , the distribution of has standard deviation comparable to its marginal standard deviation , which means that we ca nt move too much further with an ensemble update than we do with our metropolis updates .an indication of this comes from the acceptance rate for ensemble updates of in ens2 , which we can see is nt improved by much as more pool states are added .parameter estimates for the best performing kf , ens1 and ens2 settings are presented in table [ table : est ] .these estimates were obtained by averaging samples from all runs with of the sample discarded as burn - in .we see that the differences between the standard errors are in approximate agreement with the differences in autocorrelation times for the different methods ..estimates of posterior means , with standard errors of posterior means shown in brackets . [ cols="^,^,^,^",options="header " , ]we found that noticeable performance gains can be obtained by using ensemble mcmc based sampling methods for the stochastic volatility model . it may be possible to obtain even larger gains on different data sets , and with even better tuning . in particular , it is possible that the method of updating with an ensemble , or some variation of it , actually performs better than a single sequence method in some other instance .the method of kastner and fruwirth - schnatter ( 2014 ) relies on the assumption that the state process is linear and gaussian , which enables efficient state sequence sampling using kalman filters. the method would not be applicable if this was not the case .however , the ensemble method could still be applied to this case as well .it would be of interest to investigate the performance of ensemble methods for stochastic volatility models with different noise structures for the latent process .it would also be interesting to compare the performance of the ensemble mcmc method with the pmcmc - based methods of andrieu et .al ( 2010 ) and also to see whether techniques used to improve pmcmc methods can be used to improve ensemble methods and vice versa .multivariate versions of stochastic volatility models , for example those considered in scharth and kohn ( 2013 ) are another class of models for which inference is difficult , and that it would be interesting to apply the ensemble mcmc method to .we have done preliminary experiments applying ensemble methods to multivariate stochastic volatility models , with promising results . 
for these models , even though the latent process is linear and gaussian , due to a non - constant covariance matrix the observation process does not have a simple and precise mixture of gaussians approximation .this research was supported by the natural sciences and engineering research council of canada .a. s. is in part funded by an nserc postgraduate scholarship .r. n. holds a canada research chair in statistics and machine learning .0.2 in andrieu , c. , doucet , a. and holenstein , r. ( 2010 ) .`` particle markov chain monte carlo methods '' , _ journal of the royal statistical society b _ , vol .72 , pp .269 - 342 .kastner , g. and fruhwirth - schnatter , s. ( 2014 ) .`` ancillarity - sufficiency interweaving strategy ( asis ) for boosting mcmc estimation of stochastic volatility models '' , _ computational statistics & data analysis _76 , pp . 408 - 423 .kim , s. , shephard , n. and chib , s. ( 1998 ) .`` stochastic volatility : likelihood inference and comparison with arch models '' , _ review of economic studies_. vol .65 , pp . 361 - 393 .lindsten , f. and schon , t. b. ( 2013 ) .`` backward simulation methods for monte carlo statistical inference '' , _ foundations and trends in machine learning_. vol .6(1 ) , pp . 1 - 143 .liu , j.s . and sabatti , c. ( 2000 ) .`` generalized gibbs sampler and multigrid monte carlo for bayesian computation '' , _ biometrika _ , vol .87 , pp .353 - 369 .liu , j.s . and wu , y.n .`` parameter expansion for data augmentation '' , _ journal of the american statistical association _ vol .94 , pp .1264 - 1274 .neal , r. m. ( 2003 ) . `` markov chain sampling for non - linear state space models using embedded hidden markov models '' , technical report no . 0304 , department of statistics , university of toronto , http://arxiv.org/abs/math/0305039 .neal , r. m. , beal , m. j. , and roweis , s. t. ( 2004 ) .`` inferring state sequences for non - linear systems with embedded hidden markov models '' , in s. thrun , et al ( editors ) , _ advances in neural information processing systems 16 _ , mit press .neal , r. m. ( 2010 ) .`` mcmc using ensembles of states for problems with fast and slow variables such as gaussian process regression '' , technical report no . 1011 , department of statistics , university of toronto , http://arxiv.org/abs/1101.0387 .omori , y. , chib , s. , shephard , n. and nakajima , j. ( 2007 ) . stochastic volatility model with leverage : fast and efficient likelihood inference , _ journal of econometrics _ ,vol . 140 - 2 , pp . 425 - 449 .petris , g. , petrone , s. and campagnoli , p. ( 2009 ) . _ dynamic linear models with r _ , springer : new york .scharth , m. and kohn , r. ( 2013 ) .`` particle efficient importance sampling '' , arxiv preprint 1309.6745v1 .shestopaloff , a. y. and neal , r. m. ( 2013 ) .`` mcmc for non - linear state space models using ensembles of latent sequences '' , technical report , http://arxiv.org/abs/1305.0320 .yu , y. and meng , x. ( 2011 ) .`` to center or not to center , that is not the question : an ancillarity - sufficiency interweaving strategy ( asis ) for boosting mcmc efficiency '' , _ journal of computational and graphical statistics _ , vol .20 ( 2011 ) , pp .531 - 570 .here , we derive the sufficient statistics for the stochastic volatility model in the two parametrizations and the likelihoods in terms of sufficient statistics .
|
in this paper , we introduce efficient ensemble markov chain monte carlo ( mcmc ) sampling methods for bayesian computations in the univariate stochastic volatility model . we compare the performance of our ensemble mcmc methods with an improved version of a recent sampler of kastner and fruwirth - schnatter ( 2014 ) . we show that ensemble samplers are more efficient than this state of the art sampler by a factor of about , on a data set simulated from the stochastic volatility model . this performance gain is achieved without the ensemble mcmc sampler relying on the assumption that the latent process is linear and gaussian , unlike the sampler of kastner and fruwirth - schnatter . the stochastic volatility model is a widely - used example of a state space model with non - linear or non - gaussian transition or observation distributions . it models observed log - returns of a financial time series with time - varying volatility , as follows : here , the latent process determines the unobserved log - volatility of . because the relation of the observations to the latent state is not linear and gaussian , this model can not be directly handled by efficient methods based on the kalman filter . in a bayesian approach to this problem , we estimate the unknown parameters by sampling from their marginal posterior distribution . this distribution can not be written down in closed form . we can , however , write down the joint posterior of and the log - volatilities , and draw samples of from it . discarding the coordinates in each draw will give us a sample from the marginal posterior distribution of . to sample from the posterior distribution of the stochastic volatility model , we develop two new mcmc samplers within the framework of ensemble mcmc , introduced by neal ( 2010 ) . the key idea underlying ensemble mcmc is to simultaneously look at a collection of points ( an `` ensemble '' ) in the space we are sampling from , with the ensemble elements chosen in such a way that the density of interest can be simultaneously evaluated at all of the ensemble elements in less time than it would take to evaluate the density at all points separately . previously , shestopaloff and neal ( 2013 ) developed an ensemble mcmc sampler for non - linear , non - gaussian state space models , with ensembles over latent state sequences , using the embedded hmm ( hidden markov model ) technique of neal ( 2003 ) , neal et al . ( 2004 ) . this ensemble mcmc sampler was used for bayesian inference in a population dynamics model and shown to be more efficient than methods which only look at a single sequence at a time . in this paper we consider ensemble mcmc samplers that look not only at ensembles over latent state sequences as in shestopaloff and neal ( 2013 ) but also over a subset of the parameters . we see how well both of these methods work for the widely - used stochastic volatility model .
|
recent advances in numerical simulations of black holes in general relativity have led to many interesting results .most of these simulations have been carried out with finite - difference methods .however , the vacuum einstein equations have mathematically smooth solutions ( unless pathological coordinates are chosen ) .accordingly , one expects that spectral methods should be optimal in terms of efficiency and accuracy .einstein s equations are a hyperbolic system involving second derivatives in space and time .however , the numerical solution of hyperbolic systems using spectral methods is normally performed with a fully first - order formulation , even when the equations are naturally higher order . reducing the order of the equationsis usually achieved by introducing new variables defined as first - order time or space derivatives .the basic impetus for this first - order reduction is that there exists a well - established body of mathematical literature for first - order hyperbolic systems , which includes methods for analyzing the well - posedness of the equations and the proper way to impose stable boundary conditions in terms of characteristic variables .the obvious disadvantage of the first - order reduction is the introduction of additional variables , whose definitions ( at least for spatial derivatives ) become constraints the solution must satisfy and thus new possible sources of instability in the system .furthermore , each new variable must be evolved , increasing the number of equations and the computational cost of the simulations . in some cases, this can be a substantial increase .successful simulations of einstein s equations using spectral methods have thus far been implemented only as first - order reductions of the second - order system . in the case of the generalized harmonic form of the equations , the reduction to first order in space proceeds by introducing 30 additional variables , more than doubling the number of equations and constraints in the system .these simulations typically require significant computational time , upwards of a hundred cpu - weeks for high resolution runs .a first order in time , second order in space system has the potential to reduce the constraint - violating instabilities and the computational expense of the simulations .however , the mathematical knowledge underlying the proper formulation for such systems is much less developed .recently , gundlach and martn - garca have proposed and analyzed definitions of symmetric hyperbolicity for a general class of second order in space systems .they have also shown how one may define characteristic modes in the second - order system and thereby formulate stable boundary conditions at the continuum level .there still remains the problem of how to impose the boundary conditions in the discrete system ( using spectral methods ) .even for the simplest representative hyperbolic system , the second order in space wave equation , naive attempts to impose boundary conditions in the same way as in a first - order formulation generally fail .the difficulty is not due solely to the presence of second derivatives .for example , methods exist for treating the second order spatial derivatives in the navier - stokes equations directly using spectral methods . however, these techniques do not apply to the wave equation , as the characteristic structure is fundamentally different . 
in this work, we present a new method for imposing boundary conditions in the second - order wave equation that is robust , stable , and convergent .since the generalized harmonic form of einstein s equations appears as ten nonlinear coupled wave equations , this work provides a foundation for solving einstein s equations directly in second - order form using spectral methods .this application will appear in a subsequent paper .it is likely that the work presented here will also allow other formulations of einstein s equations , such as the bssn ( baumgarte - shapiro - shibata - nakamura ) formulation , to be treated by spectral methods without reduction to first - order form . in section [ fosh1d ]we review a typical spectral method for evolving the fully first - order form of the one - dimensional wave equation .we review how boundary conditions can be imposed using penalty methods and how stability of the system can be analyzed with energy methods . in section [ mainresult ]we present the new penalty method for the one - dimensional second order in space wave equation and prove stability of the system using energy arguments . in section [ 3dswf ]we generalize the method to three dimensions , and in section [ 3dswc ] we apply the method to the case of a scalar wave on a curved background .we begin with the one - dimensional wave equation = ^ , [ 1dwave ] where . here ,dots denote differentiation with respect to , while primes denote differentiation with respect to .we will first review a typical first - order pseudo - spectral method for evolving this equation before discussing the second - order formulation .the wave equation in eq .reduces to first order by introducing the variables and where & - , [ pidef1 ] + & ^. [ 1dvarphi ] the negative sign in the first equation is purely a matter of convention .the first - order system is thus & = - , [ 1deq1 ] + & = -^ , [ 1deq2 ] + & = -^. [ 1deq3 ] equation is just the definition of , while the definition of in eq .amounts to the addition of a constraint to the system , where ^- .the characteristic variables and speeds for this system ( see e.g. ref . ) are u_= & , & & = 0 , + u_= & n^x , & & = 1 .[ upmdef1d ] here , is the unit outgoing normal vector to the boundary , which in one dimension is just . withthis definition , is incoming ( ) at each boundary . for a symmetric hyperbolic system on a domain with boundary , there exists a ( not necessarily unique ) conserved , positive definite energy e= _ dv , which is conserved in the sense that = _ if^i .[ fluxdef ] accordingly , the time derivative of the energy is given by the flux through the boundary , e = _ f^n da , where .note that for general quasi - linear systems , the energy is only strictly conserved when coefficients in the equations are approximated as constant and lower - order terms are neglected .for the one - dimensional wave equation in eqs .- , the energy density is = ( ^2 + ^2 ) .[ 1denergy ] using eqs . , , , and , we get f^x = -= ( u_-^2 - u_+^2 ) . if we consider our domain to be the interval ] .we begin by writing the semi - discrete energy corresponding to eq . :e = , [ foshsde ] where represents a discrete inner product , as in , _ i=0^n _ i _ i^2 . here are the grid values of the function , and are the quadrature weights ( see appendix [ appgllnotes ] ) . taking the time derivative of the semi - discrete energy in eq . 
, we obtain e = & -_i _i |_i=0^n + , ( _ i0+_in)u_- + & - , ( _ i0+_in)n^xu_- , [ fosheeq1 ] where we have used summation by parts ( the discrete analogue of integration by parts ) in the first term and introduced the notation in the penalty terms . the first term in eq .is similar to the continuum result : -_i _ i in the last two terms in eq . yields e _ = _i=0,n u_- u_- , where we have written for the quadrature weight at . noting that u_-u_- = ( u_-^^2 - u_-^2 - u_-^2 ) , we put things together to find e = _ i=0,n .[ foshedot ] the condition on the penalty factor for stability depends on the boundary condition we impose on . requiring , we find : u_-^ & = 0 c , + u_-^ & = u_+ c , where .the strictest condition is obtained by insisting that the energy be bounded by the continuum energy in eq . for arbitrary : e e _ c=. [ ccond ] the situation is slightly different when considering the semi - discrete energy for a multi - domain problem .for example , suppose we consider the interval ] and ] , then the jacobian of the mapping is inherited from the continuum inner product : f , g _i f_i g_i ^_i .[ 1dmodip ] since the penalty terms in eq .contain kronecker deltas that pick out specific terms from the sums , the values for we obtain would need to be modified by a jacobian factor : . for simplicity, we will assume that the domain is the fundamental interval ] , with an inner boundary at , outer boundary conditions , and grid points per subdomain . results for legendre- and chebyshev - lobatto grids are shown for comparison . ]let us now consider the three - dimensional second order in space wave equation = & - , + = & -_i^i .the characteristic modes and speeds of this system are u_= & , & & = 0 , [ upsidef ] + u_= & n^i_i , & & = 1 , [ upmdef ] + u^0_i = & _ i- n_i n^j_j , & & = 0 , [ u0def ] where is the outward - directed unit normal to the boundary .these are the same as the characteristic variables of the first - order system obtained by defining .usually , one thinks of characteristic variables as being defined only for first - order systems , but they can be generalized to second - order systems .one way to do this is to define the second - order modes as those combinations of variables that satisfy u = -n^i _ i u + , [ fotsoscharvar ] where the dots represent derivatives transverse to plus lower order terms . as a consequence of this definition , the transverse derivatives are automatically zero - speed modes ( in fact, they can be given arbitrary speeds ) .moreover , the characteristic variables in eq .are unique only up to addition of these zero - speed modes .for example , we could redefine as for arbitrary ( fixed ) . as discussed in ref . , this ambiguity is removed for a symmetric hyperbolic system by requiring the existence of a conserved energy that is quadratic in the modes . here , that amounts to taking the definitions in eqs . - as they are .the conserved energy density is = ( ^2 + ^i_i ) .[ 3dswfenergy ] note that this energy is indeed quadratic in terms of the characteristic modes : = ( u_+^2 + u_-^2 ) + u^0iu^0_i . in analogy with the one - dimensional case in eq . , the flux is e = _( u_-^2 - u_+^2)d^2x , where represents the boundary of the domain .now consider the semi - discrete problem in three - dimensions .we encounter a few issues in generalizing from the one - dimensional case .for one , if the boundary of the domain contains edges or corners , the normal vectors there ( and hence characteristic modes ) are not well - defined . 
for reasons that will become clear below, we resolve this ambiguity by defining the normal vectors as follows .we will use upper case and lower case to denote the unnormalized and unit normal vectors , respectively . for simplicity ,suppose the domain is a cube with ] , and the successive resolutions have , and legendre - lobatto grid points per subdomain along each dimension . in this test , and .the error is a moving average over an interval , which includes data points . ]a few empirical observations are worth noting . in practice , we find that the bulk penalty terms arising from edges and corners in eq . are not actually necessary for obtaining stable , convergent evolutions . in all the tests we have performed for scalar waves in flat space , the terms due to the faces of the boundary are sufficienthowever , the additional terms in eq . may need to be included for complicated domain decompositions or in curved space applications .for more general systems of quasi - linear wave equations ( such as einstein s equations in generalized harmonic form ) , we find that it is sometimes necessary to include a boundary term enforcing continuity of the field in the penalty .that is , one makes the replacement in the penalties . in the tests that we have performed ,this is not required for a simple wave equation in flat ( or curved ) space .an alternative to defining unique normal vectors on corners and edges is to use a so - called multi - penalty method . with a multi - penalty method , boundary conditions ( and hence penalties ) on edges and cornersare defined to be the sum of those from the adjacent boundary faces .while this has the advantage of avoiding some of the issues with corners and edges , it makes obtaining analytical results such as eq . more difficult .although we have not yet fully tested this alternative in curved space applications , we find that the multi - penalty method performs equally well for scalar waves in flat space .in this section we consider the application of the new penalty method to the evolution of a scalar wave on a fixed , curved background spacetime : _^= 0 , [ curvedwaveeq ] where is the four - dimensional covariant derivative . in rewriting this equation as a first - order system, we use the standard splitting of the metric : ds^2 = -^2 dt^2 + _ ij(dx^i + ^i dt)(dx^j+^j dt),[3p1metric ] where is the lapse function , is the shift , and is the three - dimensional metric intrinsic to the constant time spatial hypersurfaces .it is assumed that and that the three - metric is positive definite . the wave equation in eq .can be rewritten in a standard way as the first - order system = & -+ ^i_i , [ foshcurvedeq1 ] + = & -^ij_i_j + ^i_i+ k + j^i _ i , [ foshcurvedeq2 ] + _ i = & - _ i - _ i + _ k_i^k + ^k_k _ i. [ foshcurvedeq3 ] equation is just the definition of the variable . as usual ,the spatial derivative variable is defined as _ i _ i .the quantities and in eq . are purely functions of the background spacetime : k & - , + j^i & - _ j ( ^1/2^ij ) , where in deriving eq . , the equivalence of interchanging indices in _i_j = _ j _ i has been assumed .this reduction to first order has therefore introduced two constraints to the system : , where _ i & _ i - _ i , [ con1 ] + _ ij & _ i_j - _ j_i.[con2 ] the second - order in space equations are = & -+ ^i_i , [ curvedeq1 ] + = & -^ij_i_j+ ^i_i+ k + j^i _ i. [ curvedeq2 ] this system avoids the introduction of the constraints in eqs . - as well as the third set of evolution equations in eq . 
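before moving to curved backgrounds , the one - dimensional first - order construction reviewed earlier can be made concrete . the sketch below evolves the first - order system on a single legendre - lobatto subdomain with a penalty that drives the incoming characteristic u_- towards a prescribed boundary value ; writing the penalty as a correction to u_- only , scaled by the inverse boundary quadrature weight , and taking the penalty constant equal to one are conventional choices assumed here -- the sharp stability bound is the condition derived above , and this is the standard first - order scheme reviewed in section 2 , not the new second - order penalty that is the subject of this paper .

```python
import numpy as np
from numpy.polynomial import legendre as leg

def lgl_grid(N):
    """Legendre-Gauss-Lobatto nodes x, quadrature weights w and differentiation matrix D."""
    cN = np.zeros(N + 1)
    cN[-1] = 1.0                                         # Legendre coefficients of P_N
    x = np.concatenate(([-1.0], np.sort(np.real(leg.legroots(leg.legder(cN)))), [1.0]))
    P = leg.legval(x, cN)
    w = 2.0 / (N * (N + 1) * P**2)
    D = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        for j in range(N + 1):
            if i != j:
                D[i, j] = P[i] / (P[j] * (x[i] - x[j]))
    D[0, 0] = -N * (N + 1) / 4.0
    D[N, N] = N * (N + 1) / 4.0
    return x, w, D

def rhs(u, D, w, tau=1.0):
    """First-order system psi_t = -pi, pi_t = -phi_x, phi_t = -pi_x, with a penalty that
    drives the incoming characteristic u_- = pi - n^x phi towards u_-^bc = 0 at both ends
    (an outgoing-wave condition used only for illustration); tau = 1 is assumed stable."""
    psi, pi, phi = u
    dpsi, dpi, dphi = -pi, -(D @ phi), -(D @ pi)
    for idx, nx in ((0, -1.0), (-1, 1.0)):               # (grid index, outward normal n^x)
        correction = tau / w[idx] * (0.0 - (pi[idx] - nx * phi[idx]))
        dpi[idx] += 0.5 * correction                     # changes u_- only,
        dphi[idx] += -0.5 * nx * correction              # leaves u_+ untouched
    return np.array([dpsi, dpi, dphi])

def rk4_step(u, dt, D, w):
    k1 = rhs(u, D, w)
    k2 = rhs(u + 0.5 * dt * k1, D, w)
    k3 = rhs(u + 0.5 * dt * k2, D, w)
    k4 = rhs(u + dt * k3, D, w)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

N = 24
x, w, D = lgl_grid(N)
psi0 = np.exp(-20.0 * x**2)                              # initial pulse, pi = -psi_t = 0
u = np.array([psi0, np.zeros(N + 1), D @ psi0])
steps_per_unit = 2 * N**2                                # conservative explicit time step
dt = 1.0 / steps_per_unit
for t_unit in range(5):
    E = 0.5 * np.sum(w * (u[1]**2 + u[2]**2))            # semi-discrete energy
    print(f"t = {t_unit}   E = {E:.3e}")
    for _ in range(steps_per_unit):
        u = rk4_step(u, dt, D, w)
```

the printed energy decreases monotonically as the pulse leaves the domain through the absorbing boundaries , which is the semi - discrete stability statement of the energy argument above .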
.the characteristic variables and speeds of the second - order system are the same as those of the equivalent first - order reduction with : u_= & , & _ 0 & = -n^k_k [ upsidefswc ] + u_= & n^i_i , & _ & = - n^k_k , [ upmdefswc ] + u^0_i = & _ i- n_i n^j_j , & _ 0 & = -n^k_k,[u0defswc ] where is the outward - directed unit normal vector to the boundary of the three - dimensional spatial domain .these are the same as the characteristic modes of the scalar wave in flat space in eqs . - , with modified characteristic speeds .as discussed in section [ 3dswf ] above , the `` zero - speed '' modes can be considered to have arbitrary speeds in the second - order system .the speeds given above are chosen to be the same as those of the corresponding first - order system . additionally , these are the coefficients that appear in the boundary flux of the energy , and in this sense they are the preferred choice .the energy density for this system is the same as for the flat space scalar wave in eq . := ( ^2 + ^i_i ) .[ curvedeps ] the energy flux is found by computing the time derivative of the energy e = _^1/2 d^3x , where is the spatial domain under consideration and is the volume element .in addition to a boundary flux , differentiating the energy gives rise to volume terms that depend on various derivatives of the background ( , or ) .however , these volume terms can all be bounded by multiples of the energy itself ( or neglected entirely in the constant - coefficient approximation ) , which is all that is required for proving well - posedness .one therefore obtains e - _ f^n ^1/2 d^2x + ke , [ edotswc ] for some constant .the flux integrand is f^n = _ - u_-^2 + _+ u_+^2 + 2 _ 0 u^0ju^0_j , [ fluxswc ] and the element of area in eq .is , where and is the intrinsic metric on the boundary surface .the continuum problem is therefore well - posed with boundary conditions that control incoming modes ( those with ) . for a timelike boundary , a boundary condition is needed on and possibly on , depending on the sign of .for a spacelike boundary , either all modes are incoming , or all modes are outgoing and no boundary conditions are required ( e.g. on an excision boundary inside the horizon of a black hole ) . we could also have included a term in the energy density , replacing eq . by = ( a^2 ^2 + ^2 + ^i_i ) .[ curvedepspsi2 ] this would give an additional term in : _a^2 ^1/2 d^3x = _ a^2 ( ^i_i-)^1/2 d^3x.[swcpsi2epart ] integrating by parts in the first term on the right - hand side yields _n_i^i ^2 ^1/2 d^2x - _ ^2 _ i ( ^i ^1/2 ) d^3x . [ psi2bdryflux ] the latter term in this expression can be bounded by a multiple of the energy , while the first term contributes to the boundary flux .it may seem , then , that including the term in the energy density would require the flux of eq . to be modified .however , the entire right - hand side of eq .can in fact be bounded in the volume . making use of the relation , we find _ a^2 ( ^i_i- & ) ^1/2 d^3x + & a ( _ + || _ ) e. [ psi2inequality ] the addition of a term to the energy density therefore requires the constant in eq . to be modified , but not the flux .consequently , our conclusions about well - posedness and boundary conditions remain unchanged .it is interesting to note , however , that the same does not hold for the first - order system of eqs .- , because the first - order energy corresponding to eq .controls , but not ( and therefore the inequality in eq . 
does not follow ) .the penalties in the semi - discrete equations need to be slightly modified from those of the flat space scalar wave system in eqs . - . to see how , consider the semi - discrete equations corresponding to eqs . - with penalty functions : _ i = & -_i + + p , [ sdcurvedeq1 ] + _ i = & -^jk_j_k_i + + q. [ sdcurvedeq2 ] as usual , is it to be understood that the fields represent grid values ( e.g. ) , and differentiation is implemented , for example , by matrix multiplication . for simplicity , we will assume that the physical domain under consideration has been mapped to the cube with ] are the legendre polynomials .this is a convenient choice for obtaining analytical results because the legendre polynomials are orthogonal under a weighting function of unity : _-1 ^ 1 ( x ) p_n(x)p_m(x)dx = _ nm , with .the -point quadrature rule , _ -1 ^ 1 u(x)dxiu(x_i ) , is exact if is a polynomial of degree or less . the nodes are x_0 = & -1 , + x_n = & + 1 , + x_i = & p_n^(x ) + & 0 < i < n , and the weights are given by _note that there is no known explicit formula for the roots of must be found numerically .a function is approximated by an -order interpolating polynomial , which can be expressed as _n(x ) = _ i=0^n ( x_i ) c_i(x ) , where are cardinal functions satisfying .they can be written as c_i(x ) = .differentiation can be computed via matrix multiplication from ^_n(x_i ) = _ j=0^n d^(1)_ij ( x_j ) , where is the first - order differentiation matrix . the second - derivative matrix is defined similarly and satisfies an efficient algorithm for computing pseudo - spectral differentiation matrices is given in ref . .if are two -order polynomials , summation by parts follows naturally because the product is a polynomial of order or less : f , g^= _ i=0^n _ if_ig^_i = f_ig_i |_i=0^n - f^ , g .summation by parts generalizes to higher dimensional inner products in a straightforward way .for example , if and are -d polynomials in and : _ xf , g = & _ i , j=0^n _ i _ j ( _ xf)_ij g_ij + = & _ j=0^n _ j ( fg)|_i=0^n - f , _ x g .in this section we will show that in two or more dimensions the inner product ^i_i , p that arises in the energy arguments discussed in this paper can not be made to vanish in general with a penalty function that satisfies the boundary conditions .we will argue by counting degrees of freedom . for simplicity, consider the two - dimensional case and let the domain be a square with grid points along each dimension . instead of using a basis of legendre polynomials ,consider a ( non - orthogonal ) basis of functions . a scalar field is thus approximated on the grid as a two - dimensional interpolating polynomial of the form = _ 0 i , j n a_ij x^i y^j . [ psiexpan ] there are basis functions and hence the same number of degrees of freedom in the function .the penalty function must satisfy boundary conditions on the square .now consider operating on the expansion of in eq . 
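the quadrature formulas collected in the appendix above can be assembled and checked numerically . the closed forms used below ( nodes at the roots of the derivative of the legendre polynomial plus the endpoints , weights 2/[ n(n+1 ) p_n(x_j)^2 ] , and the standard lobatto differentiation matrix ) are the usual ones and are believed to match the partially garbled displays above ; the check verifies exactness of the quadrature up to degree 2n-1 and the summation - by - parts identity for degree - n polynomials .

```python
import numpy as np
from numpy.polynomial import legendre as leg

def lgl(N):
    """(N+1)-point Legendre-Gauss-Lobatto nodes, weights and differentiation matrix,
    assembled from the closed forms quoted above."""
    cN = np.zeros(N + 1)
    cN[-1] = 1.0
    x = np.concatenate(([-1.0], np.sort(np.real(leg.legroots(leg.legder(cN)))), [1.0]))
    P = leg.legval(x, cN)
    w = 2.0 / (N * (N + 1) * P**2)
    D = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        for j in range(N + 1):
            if i != j:
                D[i, j] = P[i] / (P[j] * (x[i] - x[j]))
    D[0, 0], D[N, N] = -N * (N + 1) / 4.0, N * (N + 1) / 4.0
    return x, w, D

N = 8
x, w, D = lgl(N)

# the quadrature rule is exact for polynomials of degree 2N - 1 or less
for k in (2 * N - 2, 2 * N - 1, 2 * N):
    exact = 2.0 / (k + 1) if k % 2 == 0 else 0.0
    print(f"integral of x^{k:2d}:  quadrature = {np.sum(w * x**k):+.12f}   exact = {exact:+.12f}")

# summation by parts: <f', g> + <f, g'> = f g |_{-1}^{+1} for degree-N polynomials f, g
rng = np.random.default_rng(0)
f = np.polyval(rng.normal(size=N + 1), x)      # random degree-N polynomials sampled on the grid
g = np.polyval(rng.normal(size=N + 1), x)
lhs = np.sum(w * (D @ f) * g) + np.sum(w * f * (D @ g))
print("summation-by-parts residual:", lhs - (f[-1] * g[-1] - f[0] * g[0]))
```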
with the laplacian .the effect of this operation on a term is essentially x^i y^j x^i-2y^j + x^i y^j-2 .[ xyterm ] since we are only interested in counting the degrees of freedom that remain in , we only need to retain one of the terms in eq .: x^i y^j x^i-2y^j .[ xyterm2 ] by doing this , we will at worst undercount the degrees of freedom in .this leaves terms of the form for and , which implies that there are at least degrees of freedom remaining in the laplacian .there are thus at most degrees of freedom for constructing a penalty function that is orthogonal to ( for arbitrary ) , which is not enough to satisfy the boundary conditions .the same argument can be applied in any number of dimensions . in particular ,in the three - dimensional case we find that there are at most degrees of freedom for constructing the penalty function not enough to satisfy the boundary conditions , which proves the assertion made below eq . .in this section the form of the three - dimensional bulk penalty given by eq . will be derived .the goal is to minimize the inner product ^i_i , p , with the values of the penalty function on the boundary given .first , we will revisit the one - dimensional problem on the interval ] . in the following we will use the index exclusively for summing over values and for .our goal is to construct the values of on the interior of the domain so as to minimize the inner product , p = _ ij_i_j _ij p_ij , [ ip ] where represents the laplacian operator , and we consider the values of on the boundary to be given .consider a point on the edge at , for example .the term in the inner product due to this point is _ 0 _ j _ 0j p_0j . [ edgept ] now define on the interior along the row to be p_ij = p_0j f_i , just as in the one - dimensional case . using the identification of as interpolation weights from eq . , the contribution to the inner product from the interior of this row is which approximately cancels the term from the point on the edge in eq . . in eq .we have written for the interpolation weights defined in eq . .next , consider a point at a corner , say .the term in the inner product due to this point is _ n _ 0 _ n0 p_n0 .[ cornerpt ] define on the interior of the domain to be p_ij = - p_n0 g_i f_j .the contribution of to the inner product on the interior of the square is now which approximately cancels the contribution from the point on the corner in eq . .following this procedure , we construct on the interior by adding a contribution from each boundary segment : edges and corners on this -d domain . explicitly , this gives this generalizes to three or more dimensions in a straightforward way .each term in has a number of products of or equal to the codimension of the boundary segment it depends on .the only caveat is that the sign of the terms added to should be , where is the codimension of the boundary piece producing the term .this is evident in the -d example above where the terms in eq . due to the corners are negative .the sign difference arises simply because of the negative sign in the relation between the interpolation weights and the functions in eqs . - .with the penalty function constructed according to the above procedure , the inner product of with _ any _ analytic function ( hence ) satisfies h , p 0 , n .[ hinprod ] in particular , we have shown that the last term in eq . asymptotically vanishes , as claimed below eq . .moreover , while we have not bounded the error for a given resolution , the inner product in eq . 
will be as small as possible in the sense that it vanishes exactly when the function entering it is replaced by its polynomial approximation up to the order of the expansion .
|
current spectral simulations of einstein s equations require writing the equations in first - order form , potentially introducing instabilities and inefficiencies . we present a new penalty method for pseudo - spectral evolutions of second order in space wave equations . the penalties are constructed as functions of legendre polynomials and are added to the equations of motion everywhere , not only on the boundaries . using energy methods , we prove semi - discrete stability of the new method for the scalar wave equation in flat space and show how it can be applied to the scalar wave on a curved background . numerical results demonstrating stability and convergence for multi - domain second - order scalar wave evolutions are also presented . this work provides a foundation for treating einstein s equations directly in second - order form by spectral methods .
|
a possible explanation for aging in biological populations is based on deleterious mutation accumulation whose effects are felt at late ages when the intensity of natural selection is lower : mutations in the genome causing diseases genetically programmed to happen at late ages will not prevent reproduction and these mutations may spread through the population .population dynamics always involve non - linear evolution equations and the attractors of these dynamical systems may develop correlations in the stationary solutions that can not be explained by the analysis of the equations_ per se_. on the other hand , analytical solutions to these equations may be very hard to obtain .the penna model for biological aging has been proposed as a tool to perform numerical simulations of dynamical systems describing age - structured populations subject to genetic mutations and to natural selection , allowing quantitative analysis of the possible attractors .it has been successfully applied to many different aspects of biological aging , considering asexual and sexual reproduction , haploid and diploid populations , different reproduction strategies , etc .. for recent reviews see .the asexual version of the penna model considers a bit - string associated to each individual of a given population .this bit - string is inherited from the individual s parent and contains information about the genetically programmed diseases that the individual will develop during its life .a one in the k bit of the string implies a disease to be switched on at age k. each individual dies whenever the disease happens in its life , that is , the locus of the bit set to one gives information about the inherited maximum age this individual may reach . to avoid an exponential growth of the population, a verhulst factor is considered through a survival probability given by , where is the total population at time and is the carrying capacity of the environment . at each timestep the living individuals grow older one time unit - usually called year - and the corresponding bit in its bit - string is read , testing for the genetically programmed death . 
after a juvenile period , ending at age , the individuals start to reproduce .different assumptions on the fecundity after have been considered to describe different situations ranging from constant fecundity up to age , with no reproduction after that , to increasing with age fecundity , describing trees or crustaceans that can grow very old .also , different protocols have been proposed to produce the offspring bit - strings .asexual reproduction is simulated by considering replicas of the parent bit - string except for bits that may suffer deleterious mutations .sexual reproduction protocols mix the information contained in both parents bit - strings and then consider mutations .there are many different results , for different parameters and protocols , but it is clear that in general the model grasps the central issue of decreasing natural selection intensity with age , yielding good results for the age structure of populations .analytical approaches to these models have been considered for the asexual and sexual versions of the penna model , confirming simulations results .many parameters must be considered in the models .some of them are inescapable ones : there are many different systems and species in nature and if one wishes to describe each of them as a consequence of such a general law as natural selection , the model to be proposed should allow for different sets of parameters , as mutation rate , for example .however , some of the parameters are intrinsic to the numerical techniques used , such as the length of the bit - strings . these parameters may be turned inoffensive if scaling could be found , that is , they could be related to natural units of the system . in a recent paper , malarz considered simulations of the penna model with different bit - strings lengths and concluded that a scaling may not be possible .when scaling is to be considered , it is always interesting to avoid discrete equations , since the size of the time steps relatively to some parameters may be relevant to the scaling .here we consider the asexual version of the penna model , as proposed in reference , and take its continuous time limit .it turns out that age must also be continuous , and bit - strings turn into strings of real numbers of semi - infinite length , over which a sum of dirac is defined .we show that the lack of scaling is a consequence of the mutation protocol used , when the mutation probability is proportional to the inverse of the bit - string length , . on the othar hand , when a poisson distributed mutation probability is considered , as suggested by bernardes and stauffer , we show that the results are independent of the string length considered , provided it is longer than the maximum possible age predicted for the population .we do not contradict previous work without scaling since that work used a mutation probability that is proportional to the inverse of the bit - string length .moreover , besides the scaling with the length of the bit - string , we also show that further scaling properties arise when an adequate continuous time limit is taken to the evolution equations , allowing the definition of as a natural time unit for each population .the paper is organized as follows . in the next sectionwe take the continuum limit of the discrete model for asexual populations by considering two possible continuous time evolutions . in section 3we analyze possible mutation operators and in section 4 we present solutions and discuss scaling properties . 
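before the analytical treatment that follows , the asexual bit - string model described above can be stated as a short individual - based simulation . the sketch uses the version in which the first bit set to one kills the individual ( a disease threshold of one ) , a constant fecundity b after the minimum reproduction age , and the mutation protocol in which a fixed number of randomly chosen loci are set to one at birth ; the ordering of the sub - steps within a year and all parameter values are illustrative conventions , not prescriptions from the text .

```python
import numpy as np

def penna_step(genomes, ages, K, R, T, M, b, rng):
    """One time step ('year') of the asexual bit-string model sketched above.

    genomes : (pop, B) boolean array, bit k set = disease switched on at age ~k
    ages    : (pop,) current ages;     K : carrying capacity (Verhulst factor 1 - pop/K)
    R       : minimum reproduction age; T : number of active diseases that kills (T = 1 here)
    M       : deleterious mutations per birth; b : offspring per fertile parent per year"""
    pop, B = genomes.shape
    alive = rng.random(pop) < (1.0 - pop / K)          # Verhulst survival test
    ages = ages + 1
    read = np.arange(B)[None, :] < ages[:, None]       # bits read up to the current age
    alive &= (genomes & read).sum(axis=1) < T          # genetic death test
    alive &= ages < B                                  # convention: end of the string = death
    genomes, ages = genomes[alive], ages[alive]
    children = []
    for p in np.flatnonzero(ages >= R):                # reproduction with deleterious mutations
        for _ in range(b):
            child = genomes[p].copy()
            child[rng.integers(0, B, size=M)] = True   # set M randomly chosen loci to one
            children.append(child)
    if children:
        genomes = np.vstack([genomes, np.array(children)])
        ages = np.concatenate([ages, np.zeros(len(children), dtype=int)])
    return genomes, ages

# short demonstration run with toy parameters
rng = np.random.default_rng(0)
B, K = 32, 20000
genomes = np.zeros((2000, B), dtype=bool)
ages = np.zeros(2000, dtype=int)
for t in range(400):
    genomes, ages = penna_step(genomes, ages, K, R=8, T=1, M=1, b=1, rng=rng)
print(f"population: {len(ages)}   mean age: {ages.mean():.2f}")
```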
finally in section 5we discuss the results and conclude that a full scaling may be found only when adequate mutation protocol and time evolution equations are considered .an analytical approach for the asexual , discrete version of the penna model considers relative populations , given as where is the carrying capacity of the environment and is the number of individuals at time , with age and death age , that is , the age at which the bit one of its genome will be read .the time evolution of these populations is described by the following set of discrete time equations : here is the total population at time , that is , it is the sum over ages and death ages of .the equation for states how offspring is produced .the related equation is where the birth matrix is the probability that a parent with death age gives birth to a child with death age .when only bad mutations are considered , is triangular and it is possible to obtain the exact stationary solution for the above evolution equations , provided the birth matrix fulfills the following conditions : 1 . is a triangular matrix such that for , that is , parents can not give birth to offspring with larger life expectancy ( it corresponds to only bad mutation in the penna model ) .this is not biologically unrealistic for well adapted populations - advantageous mutations are expected to be extremely rare due to the large times required for noticeable species evolution ; 2 . , that is , the probability that the parent gives birth to offspring with its expected life length is different from zero .this condition is also expected in biological populations , and 3 . if , that is , the probability that a parent gives birth to offspring with its expected life length decreases with .in other words , the larger the parent expected life length , the larger the probability that a difference in the genetic charge of the offspring effectively reduces their expected life length ( the better the genetic code , the larger the number of events that can spoil it ) . in the original penna model ,mutations are implemented in the following way : for each newly produced bit - string , loci on the bit - string with bits are randomly chosen and set equal to 1 , regardless their previous state .the birth matrix for the asexual penna model can then be explicitly obtained by considering the probability of one mutation happening in a locus with bit zero , and estimating the change in death age caused by that mutation .the birth matrix thus obtained fulfills the above conditions .the results depend on the reproduction strategy . for a given initial reproduction age and same mutation rate , it can be proven that for final reproduction age , with constant fecundity after , there is a maximum possible life span for the stationary state . for initial populations containing ,if the population evolves towards an age structure that gives a survival rate fairly constant up to , and then decaying smoothly to zero as .if , however , or the initial population has a maximum life span smaller than , the survival rate for the stationary solution decreases smoothly up to or , where it abruptly decays to zero . in the special cases where or , reproduction happens for only one year and the whole population die after it : this is the case of semelparous species that present catastrophic senescence . 
herewe present a continuous version of this model , and discuss different ways of implementing mutations .we start by considering time as a continuous variable , hence age should also be continuous .as death can happen at any moment in time , the genome must be described by a real variable interval .to each individual we associate a genome function of a real variable , where the string length may possibly go to infinity .this function is a sum of dirac , that is where the locations of the , , represent the ages at which genetical diseases are switched on .we define the health status of an individual as such that the death age of the individual is reached when , that is , when the -function or the genetically programmed disease happens in its life .now is the relative number of individuals at time with age between and and death age between and .the total population is calculated from as where is the maximum death age in the population ( ) .there are two continuous time evolution equations for that are compatible with the discrete limit eqs .( [ eqdi ] ) , as we will discuss now .this approach is based on the product integration calculus proposed by volterra in 1887 .the first of eqs.([eqdi ] ) can be written in the continuous version as where the left hand side has been taken to the power and is a constant with dimensions of time such that in the discrete limit : is a convenient time unit . the above equation can be rewritten as where the total derivative can be displayed as clearly and . the stationary solution ( ) for eq.([dlog ] ) is then where we have dropped the time index for the stationary solutions .observe that this is in the same form as the solution for the discrete version of the model . a more orthodox way of writing the continuous limit for eqs .( [ eqdi ] ) is to rewrite them as follows where has the same role as in the volterra description : a convenient unit such that for the discrete limit time interval .the differential equation is with the total derivative analogously given as in eq .( [ dtotal ] ) .the stationary solution is the difference between stationary solutions eqs.([solm ] ) and ( [ sols ] ) is not important when , since in this case . on the other hand , observe that the above equations allow . both solutions depend on the newborn populations that in turn depend on the birth matrix , that is where is the number of offspring produced per fertile individual per time unit . before proceeding with the solutionlet us obtain .here we shall consider two ways of implementing mutations , an analogue to the discrete penna model and a poisson distributed probability of happening mutations .let us assume that the genome length is .one parameter of the model is the number of loci randomly chosen to assign a new on the genome .diversely from the discrete model , the probability of this happening on a locus already with a is vanishingly small .hence , the only possibility of the offspring death age being equal to that of the parent occurs when no new mutation happens before .the mutation operator for exactly one mutation is given as where is the step function ( equals to 1 if and 0 otherwise ) that guarantees that the child death age is always less or equal to that of the parent .the birth matrix for mutation is > from eqs.([mu ] ) and ( [ fmu ] ) it is clear the origin of the lack of scaling : as is varied , the mutation probability per unit of genome length varies . 
and as appears both as a linear coefficient and as an exponent , it is at least not trivial to find sets of parameters that would yield equivalent solutions . on the other handthis mutation protocol is too artificial : certainly there are genomes that do not suffer mutations at all as there could be some that suffer more than a fixed number of mutations .a more realistic assumption is to consider a fixed ( small ) mutation probability per unit of genome length , taken from a poisson distribution , as we consider in what follows .consider the probability of mutations happening in a given length of a genome as given by a poisson distribution where is the probability of occurring one mutation ( adding a new ) per unit of genome length . to build up the birth matrix , there are three possibilities to consider : _i ) _ if , then , since we consider only the possibility of adding new to the genome ( deleterious mutations ) ; _ ii ) _ if , meaning that no mutation occurs before and the offspring death age is the same as that of its parent , and finally _ iii ) _ if , when new are added to the genome before the occurrence of the parent disease . taking all the possibilities into account, the birth matrix can be written as \ , \end{aligned}\ ] ] where guarantees that if and .observe that mutational meltdown may be prevented since there is a non - zero probability that some offspring are bred without additional harmful mutations ( ) .it can be shown that the above birth matrix obeys the normalization condition that is the sum over all possibilities for the offspring genome .comparing eqs .( [ mu ] ) and ( [ pois ] ) , we observe that the mutation controlling parameters in the different mutation protocols are respectively and , that is , using poisson protocol the mutation probability decouples from the genome length .hence , since no longer appears in the mutation operator , the results do not depend on , and scaling should be expected .when only bad mutations are considered , there may exist a maximum life span to the steady state solutions that is independent of . in this case , the stationary solutions are not affected by bits located after and consequently are independent of , provided the mutation rate per genome length does not depend on .in other words , the dependence on seems to be responsible for the lack of scaling in eq.([mu ] ) when mutations are implemented as in the discrete version of the penna model . on the other hand ,further scaling properties can be found in the solutions of the evolution equations , provided the poisson mutation protocol is used , as we show in the next section .to have further insight of what happens we shall consider explicit solutions to the evolution equations .from now on we shall consider only poisson mutation protocol .we first rewrite eq.([bir1 ] ) for the newborn offspring using eq.([sols ] ) for the orthodox evolution : \;\ ; f(m , m ' ) \;\ ; x(0,m ' ) \ .\ ] ] we observe next that has dimensions of time , , , and have dimensions of time , while , , and have dimensions of time and is dimensionless .we can take as the time unit and rewrite all equations in dimensionless variables , which is equivalent to rewrite all equations setting and reading all quantities as given in time unities of .we then have one less parameter , and we remark that reading in units of links the mutation rate to the exponential constant of time evolution . 
even with one less parameter , eq.([bir2 ] ) is an integral equation , involving all newborn population and the birth matrix. we can solve this equation because is triangular , but we must find out the maximum possible value for the death age that any stationary solution may present .we do this in an analogous fashion to that used for the discrete version of the model .we assume that there is a maximum death age in the population .as is triangular - there are only bad mutations - as time proceeds this maximum value can not increase , although too long living individuals may disappear .offspring with death age equal to can be produced only by . using eq.([pois ] ) , the newborn equation for can be written as \;\ ; \exp{(-pm^ * ) } \;\ ; x(0,m^ * ) \ .\ ] ] the trivial solution , , is always possible .a nontrivial solution , , is possible if two conditions are met .first , = 1 \ , \ ] ] what guarantees that eq.([msta ] ) is satisfied , and a second condition that guarantees that , |_{m = m^ * } \geq 0 \ , \ ] ] resulting in all these calculations may be repeated for the volterra evolution equations .the equivalent equations to be solved have the same form as eqs.([cond1 ] ) and ( [ cond2 ] ) with replaced by , and the corresponding limit for the maximum life span in a given population is a limit for the life span of a population means that if , at initial times , the maximum life span in the population is , in the stationary solution the population goes to a state where .but if initially , the population remains with that maximum life span , since in this model only bad mutations are allowed to happen . for each possible ( )the total population may be obtained from eq.([cond1 ] ) or from its analogue for respectively orthodox or volterra evolutions . is the largest value of that satisfies both conditions ( [ cond1 ] ) and ( [ cond2 ] ) or their analogues .these two limits for the maximum life span for different evolution protocols depend differently on the value of the total population .we can observe some scaling properties regarding the maximum possible life span and total population . if we have , for every , then for the orthodox evolution on the other hand , for the volterra description the total population scaling is different : and the relation between population densities and is far from trivial : we note that for both evolutions the value of maximum life span does not depend on . hence varying the number of harmful diseases that the individuals may tolerate will not change the maximum life span , although it can change the survival rate for .the reason is that both conditions stated in eqs .( [ cond1 ] ) and ( [ cond2 ] ) are generated by the first term in the expression for the birth matrix , eq.([pois ] ) , that gives the probability for a child with the same death age as its parent .this term depends only on the parent death age , and not on the number of diseases that there are in the parent genome before .this is also in agreement with the fact that the increasingly better medical care has increased human life expectancy but has not equally changed the maximum life span . in the penna modelthe effect of proper medical care is equivalent to a shift of the maximum number of genetically programmed diseases that an individual may endure . in fig.([mmaxvsp ] ) the plots of maximum life span and total population as functions of mutation probability for different values of fecundity . the maximum life span decreases with increasing and decreasing . 
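curves such as those in fig.([mmaxvsp ]) can also be generated numerically by iterating a discretized version of the evolution equations with the poisson birth matrix until a stationary state is reached . the sketch below is only illustrative : the age grid , the fertility window ( here every individual older than one grid step reproduces ) and the placement of the verhulst factor are assumptions made for simplicity , not the exact discretization behind the figure .

```python
import numpy as np

def stationary_state(p, b, A=60, n_steps=4000):
    """iterate a discretized version of the evolution equations with the poisson
    birth matrix; return (largest surviving death age, total population).
    grid, fertility window and verhulst placement are illustrative assumptions."""
    # poisson mutation kernel: column m' is the distribution of the child death age
    F = np.zeros((A + 1, A + 1))
    for mp in range(A + 1):
        k = np.arange(mp)
        F[k, mp] = np.exp(-p * k) - np.exp(-p * (k + 1))   # first new disease lands in [k, k+1)
        F[mp, mp] = np.exp(-p * mp)                         # no new disease before mp
    keep = np.triu(np.ones((A + 1, A + 1)))                 # alive while age <= death age
    x = np.zeros((A + 1, A + 1))                            # x[a, m]: age a, death age m
    x[0, A] = 0.01                                          # seed with long-lived newborns
    for _ in range(n_steps):
        N = x.sum()
        V = max(0.0, 1.0 - N)                               # verhulst survival factor
        new = np.zeros_like(x)
        new[1:, :] = V * x[:-1, :]                          # aging and intra-specific death
        new *= keep                                         # genetic death at the death age
        new[0, :] = V * b * (F @ x[1:, :].sum(axis=0))      # newborns from all adults
        x = new
    alive_per_m = x.sum(axis=0)
    nz = np.nonzero(alive_per_m > 1e-9)[0]
    return (int(nz.max()) if nz.size else 0), float(x.sum())

for b in (0.3, 0.5, 1.0):                 # hypothetical fecundities
    row = []
    for p in (0.05, 0.1, 0.2):            # mutation probability per unit genome length
        m_star, N = stationary_state(p, b)
        row.append("p=%.2f: m*=%2d n=%.3f" % (p, m_star, N))
    print("b=%.1f   %s" % (b, "   ".join(row)))
```

the qualitative trend of such runs agrees with the discussion above : the largest surviving death age shrinks as the mutation probability grows and as the fecundity is reduced .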
to fully appreciate the scaling present in the solutions we must solve eq.([bir2 ] ) . for that we first transform the integral equation into a differential one .consider a stationary population with , then for the birth equation can be rewritten as where is defined as such that , after some calculations we can write we now rewrite eq.([bir3 ] ) as we then differentiate times both sides , and use the fact that we can calculate the value of and all its derivatives at .we then arrive to the following differential equation : this equation may be numerically solved . to obtain the analogue of the above equation for the volterra evolution it suffices to replace by .nevertheless , in what follows we will consider only the orthodox evolution .version of the penna model , where scaling may not be found due to the somewhat artificial dependence of the mutation rate on the length of the genome . a second mutation protocol considers a poisson probability distribution , with mutations occurring with a fixed probability per unit of genome length . as this last mutation protocoldoes not depend on the genome length , the whole set of stationary solutions does not depend on .morever , in the special case of the orthodox time evolution and poisson mutation protocol , we could find further scaling relations such that the initial reproduction age may be taken as the natural time unit of the system . in this sense , all rates , as mutation and birth rates , are renormalized in units of .one direct consequence of this scaling is the correlation between earlier initial reproduction age and earlier senescence , an effect that is due to the scaling inherent to the model that shows up in the stationary solutions of the population dynamics equations .that is , here demographic effects only are responsible for this correlation and antagonistic pleiotropy effects are not needed .the generalization of sexual version of the penna model is now under investigation and shall appear elsewhere .the authors thank d. stauffer for suggesting this work and fruitful discussions with s. moss de oliveira , t.j.p .penna , and a.t . bernardes .this work has been partially finaced by brazilian agencies cnpq , capes and fapergs .b. charlesworth , _ evolution in age - structured populations _ , 2 edition , cambridge university press , cambridge ( 1994 ) .m. rose , _evolutionary biology of aging _ , oxford university press , new york ( 1991 ) .penna , j. stat .phys . * 78 * , 1629 ( 1995 ) .bernardes , _ in ann .physics iv _ , ed .d. stauffer world scientific , singapore , 359 ( 1996 ) .s. moss de oliveira , p.m.c . de oliveira , and d. stauffer , _ evolution , money , war , and computers _ , teubner , stuttgart - leipzig ( 1999 ) . m. argollo de menezes , a. racco , and t.j.p .penna , physica a * 233 * , 221 ( 1996 ) .r.m.c . de almeida , s. moss de oliveira , and t.j.p .penna , physica a * 253 * , 366 ( 1998 ) r.m.c . de almeida and c. moukarzel , physica a * 257*,10 ( 1998 ) k. malarz , int . j. mod .c * 11 * , 309 ( 2000 ) .a.t . bernardes and d. stauffer , int . j. mod .c * 6 * , 789 ( 1995 ) . v. volterra , rendiconto accademia dei lincei * 3 * , 393 - 396 ( 1887 ) .v. volterra , rendiconti del circolo mathematico di palermo * 2 * , 69 - 75 ( 1888 ) .dollard and c.n .friedman , _ product integration with applications to differential equations _, addison - wesley publishing company , london ( 1979 ) .k.w . 
wachter and c.e .finch , in _ between zeus and the salmon .the biodemography of longevity _ , national academy press , washington dc ( 1997 ) .m. azbel , proc .( usa ) * 96 * , 3303 ( 1999 ) .e. niewczas , s. cebrat , and d. stauffer , theory in bioscience * 119 * , 122 ( 2000 ) .
|
in this paper we consider a generalization to the asexual version of the penna model for biological aging , where we take a continuous time limit . the genotype associated to each individual is an interval of real numbers over which dirac are defined , representing genetically programmed diseases to be switched on at defined ages of the individual life . we discuss two different continuous limits for the evolution equation and two different mutation protocols , to be implemented during reproduction . exact stationary solutions are obtained and scaling properties are discussed . keywords : penna model ; biological aging + pacs05.40.+j ; 87.10.+e
|
speech enhancement in presence of background noise is an important problem that exists for a long time and still is widely studied nowadays . the efficient single - channel noise suppression ( or noise reduction )techniques are essential for increasing quality and intelligibility of speech , as well as improving noise robustness for automatic speech recognition ( asr ) systems .the noise reduction mechanism is designed to eliminate additive noise of any origin from noisy speech .most often the estimation of speech and noise signal parameters is performed in spectral domain .the successful example is the minimum mean square error ( mmse ) estimator , which utilizes statistical modeling of speech and noise frequency components .one of the most simple and popular approach is spectral subtraction , where noise spectral profiles are estimated during non - speech segments , and then subtracted from speech segments in magnitude domain .however this method is restrictive to the quasi - stationarity of the observed noise .some methods are based on semi - supervised noise reduction , where the target noise signals are presented to train the system .recently non - negative matrix factorization ( nmf ) had become a widely used tool for making useful audio representations , especially in context of blind source separation and semi - supervised noise reduction .the latest is considered to be a special case of separation problem with additive background noise as secondary source .nmf factorizes the given magnitude spectrogram into two non - negative matrices where columns of dictionary are magnitude spectral profiles , and contains gain coefficients .the key steps of semi - supervised nmf - based speech enhancement are 1 ) estimation of noise dictionary from the given training sample , then 2 ) factorizing input spectrogram using speech and noise dictionaries , i.e. and finally 3 ) reconstruct using clean speech magnitude estimation or by using it to filter the initial complex spectrogram . though these simple steps provide good results while reducing non - stationary noise sources , they do nt consider any additional knowledge about specific signal structure , lacking the interpretability of estimated components and degrading the final performance .several methods have been proposed to overcome these limitations by introducing regularization probabilistic priors and constrained nmf procedure for noise estimation .the promised results have been achieved by using non - negative hidden markov model framework that can effectively handle temporal dynamics . in the regularized nmf problem for speech and music separation is stated .this method uses normalized sparsity constraints on musical dictionary and gain factors for better discriminating between signals .however much fewer papers are dedicated to building appropriate constraints on target speech dictionary itself , rather than other nmf components .most natural constraints should enforce harmonicity of columns . in this paperwe adopt the approach proposed by in polyphonic music transcription task and further develop it to enhance noisy speech signals .the rest of the paper is organized as follows .the first section describes the underlying speech and noise models , which lead to nmf representation with linear constraints of dictionary columns .the multiplicative updates algorithm is applied to estimate unknown parameters . then the regularized extension of this model is proposed . 
in the next section experimental evaluations on noisy timit corpusare given .in conclusion we discuss about advantages and drawbacks of contributed method .this section considers speech and noise matrix dictionaries in which columns , or _ atoms _ follow specific linear models that are discussed below .+ the basic of speech production assumes that the time - domain excitation source and vocal tract filter are combined into convolutive model : the excitation signal itself could be presented by summing up complex sinusoids and noise on frequencies that are multiples of fundamental frequency : the quasi - stationarity of the transfer function and fundamental frequency function takes place during the short frame , so that and .putting together two representations and provides : where the hat symbol indicates fourier transform of the corresponding function .the approximation of signal in magnitude short - time fourier transform ( stft ) domain using window function is given as : localizes most energy at low frequencies .then we suppose that frequency response at any frame is modeled by the ( possibly sparse ) combination of fixed spectral shapes with non - negative weights , i.e. . in this casethe final representation formula is : the latest representation could be efficiently wrapped in matrix notation by taking discrete values and , since only one fundamental frequency is expected for the given time frame .it is possible to choose it from the discrete set of hypothesized fundamental frequencies bounded by and .this leads to possible combinations .therefore the following equation for the input speech spectrogram holds : each column of matrix represents one _ harmonic atom _ , in which isolated harmonics are placed in columns of matrices , weighted by constant amplitude matrix .the representation coefficients and gain matrix are needed to be defined .being not so physically motivated as speech model the noise model of signal is constructed in the similar way by assuming additivity of corresponding spectral shapes ( however this could also be theoretically approved by exploring band - limited noise signals , that is not the case of the current work ) .each time - varying noise magnitude is decomposed into sum of individual static components : note that is the predefined set of noise magnitudes extracted from the known noise signals , whereas non - negative filter gains should be defined from the observed data . as statedbefore the non - negative combination leads to the following representation : and in matrix notation for the same discrete and on input noise spectrogram : with represents noise dictionary with atoms , contains noise spectral shapes combined with unknown coefficients to produce noise model .here we introduce the general formulation of nmf problem with linear constraints that follows from speech and noise representations .as soon as spectrograms in both cases factorize by the product of two non - negative matrices , the -divergence between observed spectrogram and product could be chosen for approximation .here we only consider a special case called kullback - leibler divergence , but other divergences could be used as well .the following general optimization problem ( _ linnmf _ ) gives solution to and : with minimization over and .the factors are spanned by the columns of matrices that could be diverse for . 
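before turning to the update rules , it may help to see how one column of the constrained speech dictionary can be assembled in practice . the sketch below builds a matrix of isolated - harmonic magnitude spectra for a single hypothesized fundamental frequency and combines them into one harmonic atom ; the sampling rate , frame length , number of harmonics and the 1/h amplitude decay are illustrative assumptions , not the exact construction used in the experiments . the update rules for the factorization itself are discussed next .

```python
import numpy as np

# minimal sketch of one set of linear constraints: columns of E are magnitude
# spectra of isolated harmonics of a hypothesized fundamental frequency f0
fs = 16000          # sampling rate [hz] (assumed)
n_fft = 512         # analysis frame length, 32 ms at 16 khz (assumed)
n_h = 30            # number of harmonics per atom (assumed)
f0 = 140.0          # one hypothesized fundamental frequency [hz]

n_bins = n_fft // 2 + 1
window = np.hanning(n_fft)
t = np.arange(n_fft) / fs

E = np.zeros((n_bins, n_h))
for h in range(n_h):
    tone = np.cos(2.0 * np.pi * (h + 1) * f0 * t)
    E[:, h] = np.abs(np.fft.rfft(window * tone))   # windowed sinusoid at (h+1)*f0
E /= E.max(axis=0, keepdims=True)                  # normalize each harmonic column

# one harmonic atom is a non-negative combination of the columns of E;
# a 1/h amplitude decay roughly mimics a triangle-pulse excitation
alpha = 1.0 / np.arange(1, n_h + 1)
atom = E @ alpha                                   # candidate column of the speech dictionary
print("atom length:", atom.shape[0],
      " strongest bin:", int(atom.argmax()),
      " first-harmonic bin ~", round(f0 * n_fft / fs))
```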
using multiplicative updates heuristic , it could be shown that the following algorithm solves the optimization problem : where denotes -th row of matrix , is the all - ones matrix and multiplications and divisions are element - wise . by iterating these rules the factors modeled in linear subspace with dimensionality implied by rank of .it has been experimentally found that many solutions achieved by tend to reduce the rank of corresponding subspace .in other words , each tends to have sparse representation in basis .it is not the desired solution in the current task , in case if contains windowed - sinusoid magnitude values on the particular frequency .as we want to extract full harmonic atoms from signal , it is expected that every harmonic has non - zero amplitude ( especially in low - frequency band ) .the _ densenmf _optimization task is proposed to overcome this problem that favors non - zero coefficients in vectors : the following rules are also derived from multiplicative updates for -normalized coefficients : where indicates the vector of all - ones of the same size as .it should be noted that convergence properties of presented algorithms are not studied .however during the experiments the monotonic behavior of the target function has been permanently observed for arbitrary .the actual speech enhancement follows the similar procedure as described in and .the initial noise shapes are preliminary extracted from the noise - only magnitude spectrograms .these shapes are used directly to form noise atoms as in .harmonic atoms are constructed using fourier transform of hann window shifted on and scaled by that approximates triangle pulse excitation .this process requires the following parameters to be set : * the set of hypothesized fundamental frequencies that consists of frequency limits and , and points equally spaced between these limits . as we are going to show in experimental evaluations , parameter has drastic effect on noise suppression performance . *the number of harmonic atoms to be estimated per each hypothesized fundamental frequency . * the number of harmonics .for each atom is chosen , where is the sampling rate . the constrained speech and noise dictionaries and are then combined into matrix ] is randomly initialized . using iterative updates or all representation coefficients and gain matricesare obtained using different sparsity parameters and for speech and noise components .finally the denoised signal is made via inverse stft of wiener - filtered input ( see for details ) .+ figure [ fig : spectrograms ] illustrates the denoised spectrograms of short excerpt of speech contaminated by non - stationary noise . 
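as a rough , simplified stand - in for the procedure above ( the exact linnmf and densenmf update rules are the ones referenced in the equations of the previous section ) , the sketch below runs standard kullback - leibler multiplicative updates for the gain matrix only , with a fixed concatenated speech / noise dictionary , and then forms a wiener - type mask for reconstruction . the dictionary contents , sizes and spectrogram are synthetic placeholders .

```python
import numpy as np

def kl_nmf_gains(V, W, n_iter=25, eps=1e-12):
    """multiplicative updates for H in V ~ W @ H under kl divergence,
    with the dictionary W kept fixed (semi-supervised setting)."""
    rng = np.random.default_rng(0)
    H = rng.random((W.shape[1], V.shape[1])) + eps
    ones = np.ones_like(V)
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.T @ ones + eps)
    return H

def enhance(V, W_speech, W_noise, n_iter=25, eps=1e-12):
    """return a wiener-type mask for the speech part of the magnitude spectrogram V."""
    W = np.hstack([W_speech, W_noise])
    H = kl_nmf_gains(V, W, n_iter)
    ks = W_speech.shape[1]
    S = W_speech @ H[:ks, :]          # speech magnitude estimate
    Nn = W_noise @ H[ks:, :]          # noise magnitude estimate
    mask = S / (S + Nn + eps)         # applied to the complex stft before the inverse stft
    return mask, H

# tiny synthetic example (hypothetical dictionaries and spectrogram)
rng = np.random.default_rng(1)
W_s = np.abs(rng.random((257, 40)))
W_n = np.abs(rng.random((257, 16)))
V = W_s @ np.abs(rng.random((40, 100))) + W_n @ np.abs(rng.random((16, 100)))
mask, H = enhance(V, W_s, W_n)
print("mask range:", float(mask.min()), float(mask.max()))
```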
to demonstrate the performance we have tested the proposed nmf - based algorithms on timit corpus .1245 sentences from 249 speakers were mixed with various types of non - stationary noises , taken from airport , car , train , babble and street backgrounds .we have used signal - to - noise ratio ( snr ) as objective quality measure .four types of input snr are explored : -5db , 0db , 5db , 15db .the algorithms parameter settings are the following .the sampling rate of all signals is stft was applied to noise and noisy signals with 32 ms hanning window and 75% overlap .then noise matrix with = 16 columns is trained using non - constrained nmf on 10 seconds noise spectrogram of known type of noise .all atoms are constructed using the following parameters ( see section [ sseq : speech_enhancement ] ) : = , = , = 33 , = 4 , and = 30 for harmonic atoms , and = 16 for noise atoms. then _ linnmf _ and _ densenmf _ were applied to input noisy magnitude by doing and by fixing the number of iterations to 25 . = 0.2 sparsity coefficient was chosen for speech component , and = 0 for noise component according to . for _ densenmf_ algorithm the regularization parameter was set to , that was found experimentally .besides proposed algorithms we have also included two over nmf - based algorithms that serve as baseline .the first one uses known speech dictionary estimated directly from the same clean signal .then nmf computes only gain matrices and , followed by the same reconstruction .this is called _ oracle _ speech enhancement in this evaluations .the second one , called here _ nmf _ , is also nmf - based noise reduction with unknown speech dictionary which is similar to those described in and . also _ mmse _ technique has been included as reference , though this algorithm is completely blind .the figure [ fig : snr - on - timit ] demonstrates the snr performance of proposed algorithms .it is shown that for low snr _ densenmf _gives optimistic results , outperforming sometimes even _ oracle _ evaluations .however it is not so surprising because oracle algorithm actually does nt contain any intrinsic speech or noise model , poorly discriminating between both signals . 
for highersnr _ densenmf _ starts to perform worse than proposed _linnmf _ and even than baseline algorithms .it probably happens due to underestimation of non - harmonic speech components .finally we have studied how the number of harmonic atoms ( ) and different values of sparsity parameter affect _ densenmf _ noise suppression performance .ranging from 2 to 100 with , it has resulted to 10 to 500 atoms in total .we have expected that higher number of atoms would give better results for high sparsity and worse for low sparsity , since the high number of sinusoids tend to overestimate the presented noise .this expectations are confirmed by the results depicted on figure [ fig : num - vs - sparsity ] .indeed , for the maximum appears in more dense mixture of harmonic atoms , whereas lesser values of have peaks at around , and then rapidly decrease when more atoms are involved .note that though setting seems to degrade the overall performance , actually we observed that using this sparsity and high number of atoms gives more audibly pleasant result with much less noise but bit more distorted harmonics , as also could be observed from figure [ fig : spectrograms]d .hence other objective quality metrics such as in should be investigated in the future works .the main contributions of this paper are following .first we have applied prior deterministic modeling of speech and noise signals inside nmf - based speech enhancement framework .it has led to linearly constrained columns or atoms of corresponding dictionaries and .then we have proposed the new optimization problem statement and multiplicative updates algorithm that regularizes the representation coefficients so that they contain as fewer zeros as possible , i.e. , acquiring dense solution .we have tested the new method on timit corpus mixed with noises on different snr , achieving the best result for low snr among some state - of - the - art noise suppression algorithms , and slightly outperforming _ oracle _ nmf - estimator with known clean signals .however , despite the proposed nmf regularizer , most of the extracted speech atoms are still not purely harmonic .it is a complicated problem in order to develop more efficient speech enhancement method , as well as using these atoms for further analysis .therefore we will try to put some other nmf constraints that enforce spectral envelope smoothness in time and frequency to achieve more reliable results in future works .10 ephraim , y. , malah , d. speech enhancement using a minimum - mean square error short - time spectral amplitude estimator , ieee trans . on acoustic , speech and signal proc ., 32(6):11091121 , 1984 boll , s. suppression of acoustic noise in speech using spectral subtraction , ieee trans . on acoustics speech and signal proc ., 27(2):113120 , 1979 schmidt , m.n . ,olsson , r.k . single - channel speech separation using sparse non - negative matrix factorization , in proc .of interspeech , pp .26142617 , sep .2006 schmidt , m.n . , larsen , j. , hsiao , f .- wind noise reduction using non - negative sparse coding , ieee workshop on machine learning for signal proc . , pp . 431436 ,2007 cauchi , b. , goetze , s. , doclo , s. reduction of non - stationary noise for a robotic living assistant using sparse non - negative matrix factorization , in proc . of the 1st workshop on speech and multimodal interaction in assistive environments ,pp . 2833 , 2012 wilson , k.w . ,raj , b. , smaragdis , p. , divakaran , a. 
, speech denoising using nonnegative matrix factorization with priors , in proc . of int .conf . on acoustic , speech and signal proc . , pp . 40294032 , 2008 mohammadiha , n. , gerkmann , t. , leijon , a. a new approach for speech enhancement based on a constrained nonnegative matrix factorization , int . symp .on itelligent signal proc . andcommunications systems , pp . 15 , 2011 mysore , g.j ., smaragdis , p. a non - negative approach to semi - supervised separation of speech from noise with the use of temporal dynamics , ieee int . conf . on acoustics , speech and signal proc . , pp . 1720 , 2011 weninger , f. , jordi , f. , schuller b.r . supervised and semi - supervised suppression of background music in monaural speech recordings , ieee int .conf . on acoustics , speech and signal proc . , pp . 6164 , 2012 bertin , n. , badeau , r. , vincent , e. fast bayesian nmf algorithms enforcing harmonicity and temporal continuity in polyphonic music transcription , ieee workshop on app .of signal proc . to audio and acoustics , pp . 2932 .2009 gold .b. , morgan , n. speech and audio signal processing , chapter 3 , john wiley&sons , new york , 2000 mcaulay , r.j . ,quatieri , t.f . speech analysis / synthesis based on a sinusoidal representation , ieee trans . on acoustic , speech and signal proc ., 34:744754 , 1986 fevotte , c. , idier , j. algorithms for nonnegative matrix factorization with the beta - divergence , neural computation , mit press , 32(3):124 , 2010 timit acoustic - phonetic continuous speech corpus .online : www.ldc.upenn.edu/catalog/catalogentry.jsp?catalogid=ldc93s1 hu , y. , loizou , p. , evaluation of objective quality measures for speech enhancement , ieee trans . on speech and audio proc . , 16(1 ) : 229238 , 2008
|
this paper investigates a non - negative matrix factorization ( nmf)-based approach to the semi - supervised single - channel speech enhancement problem where only non - stationary additive noise signals are given . the proposed method relies on sinusoidal model of speech production which is integrated inside nmf framework using linear constraints on dictionary atoms . this method is further developed to regularize harmonic amplitudes . simple multiplicative algorithms are presented . the experimental evaluation was made on timit corpus mixed with various types of noise . it has been shown that the proposed method outperforms some of the state - of - the - art noise suppression techniques in terms of signal - to - noise ratio . * index terms * : speech enhancement , sinusoidal model , non - negative matrix factorization
|
rewriting codes are a coding - theoretic approach to allow rewriting to memories which have some type of write restriction , typically values stored in memory may only be increased .while codes for binary media were proposed in the 1980s , , within the past few years , a large number of rewriting codes directed at flash memory have been described , , , , , .most of these codes are designed for flash memory cells that can store one of discrete levels , where the values can only increase on successive rewrites .however , in the physical flash cell , charge is stored during write operations .charge , read as a voltage , is an inherently continuous quantity .commercial flash memory integrated circuits use analog - to - digital conversion , and present bits per cell of digital data externally .currently , any coding , for error - correction and rewriting , must operate on these discrete values .however , one might expect that future coding schemes may have access to the continuous , or analog values stored in the flash memory cells .this paper describes a rewriting code based upon lattices , and assumes that the analog values are available for coding .the values stored in flash cells correspond to lattice points . from a lattice perspective , conventional rewriting codes store data at the points , in a rectangular lattice. however , rectangular lattices are inefficient , and there exist lattices that have many desirable properties such as better packing efficiency .because the flash cell values are continuous quantities , this paper takes the signal - space viewpoint that has long been used for the awgn channel . among other results, it is now known that lattices can achieve the capacity of the awgn channel , and lattices appear to be a promising practical approach for bandwidth - constrained channels .in fact , a related technique , trellis - coded modulation , has already been considered for error - correction in flash memories . in this paper , error - correction is not explicitly considered , however it is an important aspect of using lattices in flash memories .an important consideration for both flash memories and awgn channels is the power constraint . for awgn channels ,the average power constraint induces an ideally spherically - shaped codebook , which can be well - approximated by a shaping region equal to the voronoi region of high dimensional lattices .however , encoding in a shaping region requires computationally expensive lattice quantization .but for flash memories , the `` peak '' power constraint is cubic , that is , all points are within the cube , corresponding to the fact that the voltage on each cell has a minimum and maximum possible value .fortunately , lattice quantization is not required . as was shown by sommer , et al ., when the lattice has a triangular generator matrix , there is an efficient encoding which results in a cubic shaping region .this paper presents a slight generalization of this method .the proposed code partitions the signal space into blocks with maximum volume ; these terms will be defined in the next section , but one may assume .some , but not necessarily all blocks , have a one - to - one mapping between information and lattice points contained within that block . when the memory is to be rewritten with new information , a new codeword either within the same block , or in an adjoining block , is selected , such that the cell values are only increased .
while there are multiple codeword candidates that can encode the new information , the codeword which maximizes the future number of rewrites is selected .the triangular - generator lattice encoding is linear , but unfortunately the linearity presents a problem . when new data is to be written to the memory , under a linear construction there is exactly one new codepoint nearest the current state codepoint .but if at least one component of the current state is near the boundary , then with some probability , the `` nearest codepoint '' will be a phantom , outside of the power constraint. it might be acceptable to select another , suboptimal codepoint .but because of linearity , all such codepoints are phantoms and inaccessible .accordingly , a random `` hash '' is introduced .this destroys the linearity , and requires a procedure to select the candidate codeword which is most suitable for rewriting .but its purpose is to increase the average number of times the memory may be written .while rewriting codes for flash memories have received some research attention , error - correction coding for flash memories is of considerable practical importance .there have been only a few studies on the dual - purpose codes which can both correct errors and allow rewriting . however , the simple concatenation of a rewriting code and error - correction code appears to be problematic .encoding the rewriting code followed by a systematic error - correction code means that parity bits are not rewritable . on the other hand , switching the concatenation results in no guarantees of minimum distance , since most rewriting codes do not appear to be systematic .however , lattices considered in this paper have a natural error - correction property , due to the euclidean distance that separates the points .while this paper assumes there is no noise , the goal is to show that a rewriting code can be constructed by an appropriate encoding from information to lattice points .an -dimensional lattice is defined by an -by- generator matrix .the lattice consists of the discrete set of points for which where is from the set of all possible integer vectors , .the voronoi region is the region of which is closer to than to any other point , and the volume of this region is the determinant of : the entry of is denoted .let be a lattice with a diagonal generator matrix .let be an -cube , given by : for , which has volume .then the codebook of the proposed code is : the lattice generator is lower triangular , and the diagonal entries satisfy the condition that is an integer . the cube is partitioned into blocks .the blocks are indexed by , given by : each block is given by the set of such that : for .if is an integer , then each block is an -cube with volume .however , is allowed to be non - integer , in which case some blocks sharing a face with will have volume less than .the lattice points inside each block form a subcodebook : the size of the full codebook and the maximum size of any subcodebook are respectively . within each block with volume , there is a one - to - one mapping from information to subcodewords , thus the rate of the code , expressed in information bits per cell is : if is used .
also , there is a one - to - many , in particular , a one - to- mapping between information and the full codebook .the encoding is as follows , and is illustrated in fig .[ fig : fig1 ] for .a random `` hash '' maps information to hashed sequences .this hash depends upon : a simple hash is simply to add a constant modulo : where is a hash vector for block .these symbols are then encoded to lattice points as where .the encoding for any block is as follows . in general, is not in .instead , the encoding finds with such that is in the cube .because the generator matrix is diagonal , the can be found by solving the inequality : for , which is unique .first , then are found in sequence . in particular : where computation at step depends upon . also , the data range depends on , that is .now , consider that the current state of the memory is .given an information sequence , or its hash , there may be many candidate codewords .for any codeword , all components of must be positive .let denote the codeword in corresponding to .since there is no a priori knowledge about future data sequences , it is reasonable that the codeword choice should maximize the number of codeword points that remain `` available '' to future writes , that is , the number of codewords in the positive direction should be maximized .while it is computationally difficult to count these points , a reasonable approximation is the volume that remains after the point is written .this argument bears some resemblance to the continuous approximation used in channel coding using lattices .in particular , if is to be written , then the remaining volume is : and the encoder should write : this maximization is computationally complex as the lattice dimension increases .generally , however there will be a codeword in a neighboring block .thus , the search can be performed not over all , but only over those positive neighbors of the block that contains the current state .this results in complexity proportional to .decoding is straightforward .if noise is present , then lattice decoding should be performed , to obtain the estimated lattice point .the encoded integers are simply , and from these , is obtained as : the information is obtained by reversing the hash function : where is defined as before .in order to make a fair normalization in the absence of noise , the scale of the lattice must be selected .
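the encoding and decoding just described reduce to a short back - substitution when written out . the sketch below is a simplified illustration : the hash step is omitted , the block index , block side and data values are arbitrary , and the generator shown is the standard lower - triangular form of the e8 generator matrix from conway and sloane , which satisfies the requirements above but is not necessarily the scaled matrix used for the numerical results .

```python
import numpy as np

def encode(G, b, s, d):
    """place the data vector b (integers with 0 <= b_i < d / G[i, i]) on the unique
    lattice point of {G z : z integer} lying inside block s, i.e. with
    s_i*d <= x_i < (s_i + 1)*d.  G must be lower triangular with d / G[i, i] integer.
    the hash step described in the text is omitted in this sketch."""
    n = len(b)
    z = np.zeros(n)
    x = np.zeros(n)
    for i in range(n):
        low = G[i, :i] @ z[:i] + G[i, i] * b[i]
        k = np.ceil((s[i] * d - low) / d)        # unique integer shift, one cube side per step
        z[i] = b[i] + k * (d / G[i, i])
        x[i] = low + k * d
    return x, z

def decode(G, x, d):
    """recover the data from a noise-free lattice point: invert the triangular
    generator, then reduce each integer modulo d / g_ii."""
    z = np.rint(np.linalg.solve(G, x)).astype(int)
    M = np.rint(d / np.diag(G)).astype(int)
    return z % M

# one standard lower-triangular generator of the e8 lattice (conway & sloane form);
# the scaled generator actually used for the paper's example may differ
G = np.array([
    [ 2. ,  0. ,  0. ,  0. ,  0. ,  0. ,  0. ,  0. ],
    [-1. ,  1. ,  0. ,  0. ,  0. ,  0. ,  0. ,  0. ],
    [ 0. , -1. ,  1. ,  0. ,  0. ,  0. ,  0. ,  0. ],
    [ 0. ,  0. , -1. ,  1. ,  0. ,  0. ,  0. ,  0. ],
    [ 0. ,  0. ,  0. , -1. ,  1. ,  0. ,  0. ,  0. ],
    [ 0. ,  0. ,  0. ,  0. , -1. ,  1. ,  0. ,  0. ],
    [ 0. ,  0. ,  0. ,  0. ,  0. , -1. ,  1. ,  0. ],
    [ 0.5,  0.5,  0.5,  0.5,  0.5,  0.5,  0.5,  0.5],
])
d = 2.0                                          # hypothetical block side
print("voronoi volume :", abs(np.linalg.det(G)))             # 1, same as the integer lattice
print("d / g_ii       :", (d / np.diag(G)).astype(int))      # all integers, as required
print("bits per cell  :", np.log2(np.prod(d / np.diag(G))) / len(G))

b = np.array([0, 1, 0, 1, 1, 0, 1, 3])           # data, each b_i < d / g_ii
s = np.array([2, 0, 3, 1, 1, 0, 2, 3])           # target block index
x, z = encode(G, b, s, d)
print("inside block   :", bool(np.all((x >= s * d) & (x < (s + 1) * d))))
print("decoded data   :", decode(G, x, d))
```

if noise were present , the rounding step inside the decoder would be replaced by a lattice decoder for the chosen lattice , as noted in the text .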
for conventional -ary rewriting codes, the rectangular lattice with integer spacing applies ; the volume of the voronoi region of this lattice is 1 .that is , a scalar is selected such that : latexmath:[\ ] ] it has a lower - triangular form , and so it is suitable for the proposed construction .naturally , there is a tradeoff between code rate and the average number of writes , and this is demonstrated in fig .[ fig : fig2 ] , obtained by computer simulation .values of were fixed , with .the code rate , and was allowed to be a non - integer .the most striking feature is that the number of writes depends strongly upon .also , while not shown here , it was observed numerically that the average number of writes increased roughly linearly in , much as the minimum number of writes is also linear in .note that many conventional -ary rewriting codes allow rewriting one _ bit _ at a time .for this lattice - based code , the entire _ word _ is re - written .this paper has demonstrated that rewriting codes based upon lattices is feasible .state - of - the - art has flash chips provide digital data to the external interface , but for lattices to be applicable , the analog values should be accessible .one of the goals of this work is to show the benefits of integrating the analog signal processing and coding in flash memories .lattices have an inherent error - correction property , and they appear to be suitable for both error correction and rewriting .in fact , the equal - voronoi - volume assumption substantially favors lattices with regard to error correction , since it is well known that increasing the dimension leads to substantial error - correction coding gain .a point to note is that the rewriting capability of lattices presented in this paper does not appear to substantially depend upon the dimension .that is , the minimum number of writes is , and there is a well - defined relationship between , , and .however , this appears to not be surprising . in 1984 ,fiat and shamir , working with very general memory models , those based upon directed acyclic graphs ( dag ) , observed : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ the significant improvement in memory capability is linear with the dag depth . for a fixed number of statesa `` deep and narrow '' dag cell is always preferable to a `` shallow and wide '' dag cell . _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ the lattice - based construction does have one weakness when the dimension is small . while the conventional -ary construction can write the maximum value of in all cells , this is not possible using lattices , and leads to a slight loss of capability . note in fig . 
[fig : fig1 ] that there are some lattice points on the boundary which can not be assigned to a subcodebook .however , this loss can be readily recovered by the superior packing density of lattices , obtained as the dimension increases .low - density lattices codes are high - dimensional lattices which can approach the asymptotic bounds for _ coding _ gain .these lattices are highly suitable for coding for flash , because some such lattices have a triangular generator matrix , suitable for rectangular shaping .their belief - propagation decoding algorithm appears suitable for decoding in the presence of noise , including some reduced - complexity decoding algorithms .a. jiang , v. bohossian , and j. bruck , `` floating codes for joint information storage in write asymmetric memories , '' in _ information theory , 2007 .isit 2007 .ieee international symposium on _ , pp .1166 1170 , june 2007 .e. yaakobi , a. vardy , p. h. siegel , and j. k. wolf , `` multidimensional flash codes , '' in _proceedings 46th annual allerton conference on communication , control , and computing _ , ( monticello , il , usa ) , pp . 392399 ,september 2008 .h. finucane , z. liu , and m. mitzenmacher , `` designing floating codes for expected performance , '' in _proceedings 46th annual allerton conference on communication , control , and computing _ , ( monticello , il , usa ) , september 2008 .h. mahdavifar , p. siegel , a. vardy , j. wolf , and e. yaakobi , `` a nearly optimal construction of flash codes , '' in _ information theory , 2009 .isit 2009 .ieee international symposium on _ , pp .1239 1243 , june - july 2009 .a. jiang and j. bruck , `` information representation and coding for flash memories , '' in _ communications , computers and signal processing , 2009 .pacrim 2009 .ieee pacific rim conference on _ , pp .920 925 , august 2009 .f. sun , s. devarajan , k. rose , and t. zhang , `` design of on - chip error correction systems for multilevel nor and nand flash memories , '' _ circuits , devices systems , iet _ , vol . 1 , pp .241 249 , june 2007 .r. micheloni , r. ravasio , a. marelli , e. alice , v. altieri , a. bovino , l. crippa , e. di martino , l. donofrio , a. gambardella , e. grillea , g. guerra , d. kim , c. missiroli , i. motta , a. prisco , g. ragone , m. romano , m. sangalli , p. sauro , m. scotti , and s. won , `` a 4 gb 2b / cell nand flash memory with embedded 5b bch ecc for 36mb / s system read throughput , '' in _ solid - state circuits conference , 2006 .isscc 2006 .digest of technical papers .ieee international _ , pp .497 506 , february 2006 .b. kurkoski , k. yamaguchi , and k. kobayashi , `` single - gaussian messages and noise thresholds for decoding low - density lattice codes , '' in _ proceedings of ieee international symposium on information theory _ ,( seoul , korea ) , pp .734738 , ieee , june - july 2009 .
|
a rewriting code construction for flash memories based upon lattices is described . the values stored in flash cells correspond to lattice points . this construction encodes information to lattice points in such a way that data can be written to the memory multiple times without decreasing the cell values . the construction partitions the flash memory s cubic signal space into blocks . the minimum number of writes is shown to be linear in one of the code parameters . an example using the e8 lattice is given , with numerical results .
|
large scale power outages or blackouts typically lead to millions of dollars in losses for industry , commercial and residential customers .these power outages can be caused by human error , equipment failure , or may result from natural disasters such as the blackout induced by hurricane sandy in the northeast of the u.s . in 2011 causing power shut off for 8 million customers for days and weeks with estimated damages around 50 billion u.s .dollars .a microgrid is designed to be interconnected with a medium voltage network under normal conditions , and to serve as a stable backup resource in case of isolation or islanding from the transmission grid ( emergency operation mode) .forced isolation may occur in cases of voltage collapse , electric faults , or drops in power quality at the point of common coupling ( pcc ) .if the microgrid is well - designed , the transition to islanded operation should ideally occur smoothly with matching voltage and current phases on the pcc .several authors have considered transition approaches to improve the power quality of the microgrid as it switches to island mode .load scheduling is often a part of a transition approach to ensure power demand can be managed during islanding mode , given the limited power supply and bandwidth of the ( renewable ) distributed energy resources within the microgrid .typically , large electric loads such as electric vehicle charging and household load appliances are considered in load scheduling , see e.g. and . as load scheduling involves optimization to minimize electric energy losses ( i.e. dumping of solar energy for lack of a load to utilize it ) over a finite number of loads , numerical methods that exploit convex optimization routines in mixed - integer linear programming ( milp ) are a promising method for optimal load scheduling .applications of milp in power systems can be seen in a variety of areas such as unit commitment of power production , power distribution network expansion , scheduling of generation units in off - grid conditions in order to maximize supply performance of the system as well as optimal scheduling of a renewable microgrid in an isolated load area .milp is also used in the optimal decentralized energy management problem of a microgrid .based on the computational tools of milp , this paper considers an optimization - based problem for scheduling loads of a group of residential customers , each with rooftop solar pv , that are connected to the distribution network via a single point of common coupling .note that the problem formulation is identical for solar pv array on the roof of an apartment or condominium building , where the solar array could serve the load of the common areas as well as some of the units during a power outage . in the case of a single building the hardware ( microgrid controller ) and governance ( building owner or homeowners association ) issues are much more straightforward than for different buildings in a neighborhood . 
in the milp formulation , residential participation ( each house with a single meteris considered a load ) is parametrized with a binary or integer component , while the optimization aims to maximize the number of loads that receive power , despite the insufficient pv power generation at all times to meet the electricity demand within the residential microgrid .the optimization is applied to residential grid data obtained from ten residential customers ( houses ) that were selected for analysis from the australian grid from .different operating strategies for the members of the microgrid were considered including isolated self - consumption and several inter - connected sharing strategies to improve the reliability of supply to each residential customer in the microgrid during periods of infrequent and sustained power outages that result in isolation of the microgrid from the electric power network .the milp optimization shows that a majority of the residential customers achieve greater reliability of electricity supply when connecting to the microgrid via `` inter - connected sharing '' in comparison to operating in isolated `` self - consumption '' mode .section [ preliminary ] introduces the dataset used in this work and showcases microgrid simulation results for select houses for a high pv and low pv day . the problem formulation is discussed in section [ problemformulaiton ] with the mathematical optimization problem and the constraints .the simulation results that quantify the benefits of different operating strategies are covered in the section [ results ] and conclusions follow .in this problem we assume a power system with bulk supply production as a generation unit connected to a distribution system via a main circuit breaker ( cb ) at the point of common coupling ( pcc ) to isolate the microgrid of residential customers each with their owns loads and pv system .furthermore , each residential customer ( also indicated exchangably by `` house '' ) has an additional cb referred to in figure [ graphtheory ] as , where is the house index . in cases of power outages ( blackouts ) , faults or power quality disruption from the main power supply , the main cb at the pcc opens and , in addition , a certain set of houses decides to isolate itself to create a modified microgrid of residential customers . the decision making process that decides which houses to ( dis)connect is managed by an optimization problem that operates at a time step of 30 minutes .we select a subset of a dataset with 300 de - identified residential customers with pv in a distribution network in australia .the optimization problem considers one year to cover many load and pv scenarios that may occur within a year for ten houses .the pv systems vary in size .the total pv rated capacity is 17.33 kw .daily peak solar power averages around 11 kw .the corresponding daytime load peak is around 8 kw with a higher peak in the evening that reaches 13 kw . for model validation, the first ten houses were selected with customers i d [ 2 13 14 20 33 35 38 39 56 69 ] for july 1 , 2010 through june 30 , 2011 defined in .the two main microgrid operational modes considered in this case study are `` isolated self - consumption '' and `` inter - connected sharing mode '' .isolated self - consumption is the case where each house has been disconnected and can only be supply its loads from its own solar power . 
if the load is higher than the solar power at any time step , then no power is supplied and the solar power is considered to be lost . in inter - connected sharing mode , houses will exchange pv power to supply their electric loads . within the inter - connected sharing mode ,three different sub strategies are investigated : * strategy a is to maximize the number of customers to be supplied .* strategy b is to maximize the time duration of supplied load or what will be referred to as number of switches .* strategy c is to minimize losses due to unutilized solar energy .additional constraints are added for all strategies and include a minimum up - time and down - time for a supplied load event .additional conditions will be included such as a `` fairness weighting matrix '' where customers are prioritized based on certain criteria . in this case studywe use the percentage of pv self - generation with respect to the load of each house as a `` fairness weighting '' .two sample days illustrate extremes in potential for solar energy to power the microgrid ( figure [ samplesdays ] ) .these results are presented here to guide the problem formulation . in summer ,solar generation is high compared to load and on this specific day it happens to exceeds load at the solar peak . on the other hand ,higher loads in winter correlate with low solar generation .the load supplied by solar is what solar was able to supply for each house in the isolation mode where the total load was not met by solar for the whole system .figure [ samplehouses ] shows the consumption from three select houses and solar generation behavior for inter - connected sharing and isolated self - consumption operational mode . for this case ,the minimum up - time and down - time are 1.5 h ( 3 time steps ) .the isolated self - consumption operational mode does not allow solar generation to supply any load unless the solar generation exceeds the load for the minimum up - time constraint .furthermore , if the house is `` turned off '' ( disconnected ) , it has to be off for minimum down - time .the minimum up - time effect occurs in some houses in figure [ samplehouses ] : for example house # 2 around noon and around 2 pm where pv generation was temporarily higher than load , but no load was supplied to the house due to the minimum up - time constraint .self - consumption is therefore only attractive if the size of the pv associated with each house is large relative to the consumption , but most houses , at least in winter , do not get any power from their local pv generation and all solar generation is lost . on the other hand , the inter - connected sharing operation mode can aggregate all pv generation to serve more customers . 
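the isolated self - consumption rule described above ( a house is supplied only while its own pv covers its load , subject to the minimum up - time and down - time ) can be checked with a simple greedy pass over the half - hourly profiles . the sketch below is such a greedy interpretation with synthetic pv and load data ; it is not the optimization used for the results reported later .

```python
import numpy as np

def self_consumption(pv, load, min_up=3, min_down=3):
    """greedy sketch of isolated self-consumption for one house: supply the load
    only while its own pv covers it, honouring minimum up-time and down-time
    (in 30-minute steps).  this is a simplified interpretation, not the milp."""
    T = len(load)
    on = np.zeros(T, dtype=bool)
    t, last_off = 0, -min_down                  # allow switching on at t = 0
    while t < T:
        window = slice(t, t + min_up)
        can_start = (t - last_off >= min_down) and (t + min_up <= T) \
            and np.all(pv[window] >= load[window])
        if can_start:
            end = t + min_up                    # stay on at least min_up steps ...
            while end < T and pv[end] >= load[end]:
                end += 1                        # ... and as long as pv covers the load
            on[t:end] = True
            last_off = end
            t = end
        else:
            t += 1
    supplied = np.where(on, load, 0.0)
    return on, supplied

# hypothetical half-hourly profiles for one day (48 steps)
steps = np.arange(48)
pv = np.maximum(0.0, 3.0 * np.sin(np.pi * (steps - 14) / 20))   # daytime bump [kw]
load = 1.0 + 0.5 * np.random.default_rng(3).random(48)          # [kw]
on, supplied = self_consumption(pv, load)
print("steps supplied:", int(on.sum()), "of", len(steps))
print("load met      : %.1f%%" % (100 * supplied.sum() / load.sum()))
print("pv utilised   : %.1f%%" % (100 * supplied.sum() / pv.sum()))
```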
allocations of energy to particular houses must be determined based on an optimization to maximize solar energy utilization and/or customer supply .figure [ samplehouses ] shows that some houses , at least during parts of the day , enjoyed load supply even though load exceeded pv self - generation .more quantitative results will be shown and discussed in section [ results ] .the optimization problem for `` inter - connected sharing mode '' to address the three different sub strategies a , b and c defined in section [ preliminary ] considered in this paper can be presented in the general format of where , and denote the mathematical formulations of the objective function , inequality constraints , and equality constraints respectively .the binary decision variable ^t ] where is the column load vector of house during a day .the notation is used to show the hadamard ( element - wise ) product of the two matrices and . by defining as a new constraint, the objective function can be updated as, the first objective function considers the number of switches which determines how many houses are supplied with power .multiplying by a column vector of ones from both sides is another way of representing the sum over all houses of the sum of switching events for each house .this ensues that the number of houses who receive power at some point is maximized follows a utilitarian philosophy , but in doing so does not maximize the solar energy utilization . aims to increase the energy supplied for the whole microgrid and it is represented as in by introducing a new variable .the substitution by in the objective function motivates the following constraint of the supplied load matrix : to prevent frequency issues , the maximum total load that the microgrid can supply must be less than the total pv energy available at each time interval : where ^t$ ] is the pv generation matrix . to avoid damage to load units and inconvenience to residents because of frequent start - ups and shut - downs , a set of constraintsare defined to guarantee that the unit is switched on ( off ) for a at least ( ) time steps before it is switched off ( on ) .these constraints are called minimum up ( down ) time and are defined as : where the matrix and are denoted as start - up and shut - down matrices respectively , and their elements are defined as : the following constraint guarantees that each house is connected to the grid at least one minimum up - time . all the possible objective functions and constraints in through are linear .thus the optimization problem is convex and can be solved via mixed - integer linear programming ( milp ) tools such as gurobi using cvx . thereexist many mature milp solvers which are capable of solving large - scale milp problems with millions of variables within a reasonable time frame .there are three main operational strategies for inter - connected sharing and each strategy is associated with two sub - strategies . 
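as a rough illustration of the formulation above , the sketch below sets up a small version of the inter - connected sharing milp in python with cvxpy , which can hand the problem to gurobi or another milp - capable back end . the variable names , the simplified minimum up - time constraint ( start - up indicators in place of the full start - up / shut - down matrices ) , the omission of the minimum down - time and of the minimum - connection constraint , and the optional weighting matrix are assumptions made for this example , not the paper's exact model .

```python
import cvxpy as cp
import numpy as np

def sharing_milp(L, P, min_up=3, weights=None):
    """Sketch of the inter-connected sharing MILP (a strategy C-like objective).

    L, P : (N, T) load and PV arrays for N houses over T half-hour steps.
    Maximizes (optionally weighted) supplied energy subject to the aggregate
    PV limit and a simplified minimum up-time constraint.
    """
    N, T = L.shape
    S = cp.Variable((N, T), boolean=True)     # 1 if house n is supplied at step t
    U = cp.Variable((N, T), boolean=True)     # start-up indicator
    D = cp.multiply(S, L)                     # supplied load (Hadamard product)

    cons = [cp.sum(D, axis=0) <= P.sum(axis=0)]      # PV must cover supplied load
    cons += [U[:, 0] >= S[:, 0]]
    cons += [U[:, 1:] >= S[:, 1:] - S[:, :-1]]       # detect switch-on events
    for t in range(T - min_up + 1):                  # simplified minimum up-time
        cons += [cp.sum(S[:, t:t + min_up], axis=1) >= min_up * U[:, t]]

    W = np.ones((N, T)) if weights is None else weights   # optional fairness weights
    prob = cp.Problem(cp.Maximize(cp.sum(cp.multiply(W, D))), cons)
    prob.solve(solver=cp.GUROBI)              # any MILP-capable solver would do
    return S.value
```

passing a weighting matrix built from each house's pv - to - load ratio would correspond to the weighted strategies a+ , b+ and c+ described below .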
in strategya , the goal is to maximize the objective function with all constraints - , which maximizes the number of households whose load is served at some point .this forces all houses to receive power for at least one minimum up - time .strategy b also maximizes the objective function while the minimum daily connection constraint in is neglected .the goal is to increase the number of switches without enforcing that all houses receive power at least once .strategy c maximizes solar energy utilization by maximizing the objective function considering all constraints other than .solar energy is distributed in every possible way to reduce any losses even if it means that more houses never receive power .the sub operation strategies a+ , b+ and c+ are exactly the same as strategies a , b and c respectively with the following modifications to the objective functions : where is a `` fairness weighting '' matrix of the same size of and .the weighting matrix introduces preferential weighting for certain houses to receive power even though it deviates from the solutions for and .the weights would be set by a governing entity based on perceived fairness criteria such as prioritizing houses with larger pv generation , lower load demand , or prioritizing critical loads ( e.g. medical needs ) either permanently or during a given time span .the weights could also be based on a market where individual homes pay to receive priority for load .for illustrative purposes , the ratio of pv generation divided by the total load is used as weighting function in here .the six strategies are compared to the self - consumption strategy .this strategy can be modeled by solving the same optimization problem as in strategy c for each house where the number of houses in the problem is equal to . to compare the simulation results of different objective functions with the isolated self - consumption operational mode ,two indices are defined in this paper .the first index is the percentage of supplied load , defined as below , where is defined as element - wise division .since and are of size ( n , t ) multiplying by vector of size ( t,1 ) results in ( n,1 ) which is the sum of supplied load for each house for a specific day .the percentage of load met is the ratio of supplied load divided by the total load for a specific day .the other index is the percentage of pv utilization of a given day , which is determined as follows , and reports what portion of individual houses solar generation is utilized by each house for the isolated self - consumption operational mode and what portion of total solar generation in microgrid is utilized in for the inter - connected sharing strategies .figure [ onedayresults ] shows aggregate results for all 10 houses over the course of two sample days , based on self - consumption with the three sharing strategies a , b and c defined in section [ operationstrategies ] . 
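before discussing those results , note that once a 0/1 supply schedule is available ( from the milp or from the self - consumption rule ) , the two indices above reduce to a few array operations . a minimal sketch , assuming the ( n , t ) array layout used in the formulation :

```python
import numpy as np

def supply_and_pv_indices(S, L, P):
    """Per-house percentage of load met and microgrid PV utilization for one day.

    S : (N, T) 0/1 supply schedule, L : (N, T) load, P : (N, T) PV generation.
    """
    supplied = S * L
    load_met_pct = 100.0 * supplied.sum(axis=1) / L.sum(axis=1)   # one value per house
    pv_util_pct = 100.0 * supplied.sum() / P.sum()                # microgrid-wide
    return load_met_pct, pv_util_pct
```

for the isolated self - consumption mode the pv utilization would instead be computed per house , i.e. supplied.sum(axis=1) / P.sum(axis=1) , matching the definition above .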
on the summer day around 20% of loads are met for all house and all house are powered at least once during the day .the january 9 results are representative for summer days when the peak of the aggregate solar generation is higher than the total load and therefore all houses can be powered during that time , independent of strategy .some houses did not benefit as much as others from energy sharing , for example house # 3 , 8 , and 10 benefit the most while houses # 4 or 6 only receive marginally more energy compared to self - consumption .house # 10 was not able to self - consume any time during the day as the load was always higher than the solar generation . comparing the inter - connected sharing operational modes a and b resulted in identical results for 6 out of 10 houses . in 3 housesstrategy c met less load compared to a and b , while 2 houses had higher load met .little to no change is expected since constraint ( 9 ) is satisfied for both strategies through the solar energy excess at midday .strategy c results were also mostly similar to a and b , but only three houses ( # 1 , 5 , 10 ) benefited , while two houses ( # 3 , 4 ) lost a substantial amount of energy .july 5 is representative of a winter day with low solar power and high load demand which tends to emphasize differences between the strategies .gains from sharing compared to self - consuming were larger : all 10 houses were powered with energy sharing while only 4 houses received some energy from their own solar generation .results for the winter day also vary by operational strategy a , b , and c. it is interesting that only four houses ( # 1 , 2 , 7 , 10 ) benefited from strategy c. considering both days , only house # 10 benefited from strategy c consistently .for these two days , there is no clear winner between the inter - connected sharing operational strategy , but it is clear that sharing energy is advantageous for every house . in the following , the results were analyzed for different months and one year to quantify the performance of each strategy based on certain objective functions .figure [ monthly ] summarizes aggregate monthly average results for isolated self - consumption and the main three inter - connected energy sharing strategies .the seasonal ( summer and winter ) trends are consistent with results for the sample days . during summer time( october through february ) there is a higher percentage of load met due to larger solar generation as well as load behavior . in all months , isolated self - consumption scores the lowest load met percentagewhile strategy c scores the highest with negligible differences between strategies a and b. strategies a+ , b+ , and c+ use the same objective function , but include the weighting that prioritizes additional constraints .the weighting matrix in this case is defined based on the ratio of solar generation to the load .figure [ housesyearly ] presents the annual load met percentage of each house for all six scenarios and the isolated self - consumption .all houses benefit from inter - connected energy sharing .this value varies over the houses but it is consistent that all are better in inter - connected sharing mode . 
comparing different strategies , seven houses have a higher load met percentage in strategy c , two houses have a higher load met percentage in strategies a and b , and for only one house all three strategies are equal . the weighting strategies follow the pattern of the original ones for all houses . houses # 6 and 7 stand out : in house # 6 the c+ strategy resulted in a higher average load met percentage over the year , while the opposite holds for house # 7 . three houses benefit from applying the fairness weighting matrix , three are not affected , and the remaining four are worse off . among the three main strategies , c and c+ together achieve the highest percentage of load met for seven out of ten houses ( four and three houses , respectively ) . ( table [ objfuncomp ] : number of houses supplied for the different strategies . ) table [ objfuncomp ] summarizes the performance of both operational modes with all strategies for one year . isolated self - consumption is the worst operational mode in terms of load met percentage and pv utilization , scoring 65% less than the best strategy . in terms of the average number of houses supplied , isolated self - consumption also scored the lowest . the c+ and c strategies differ by less than 0.2% and score the highest percentage of load met and pv utilization . in general , a strategy and its weighted version ( for example c and c+ ) are expected to yield similar results , since the weighted strategy only changes the priority of which house is supplied . moreover , the computational time for the optimization using matlab and cvx ( gurobi solver ) was less than a second , performed on a 3.4 ghz intel core i7 processor with 32 gb of ram . in this paper we propose optimization - based residential customer scheduling to improve the reliability of electricity supply for residential customers during islanded microgrid operation . each residential customer owns rooftop pv that can be used to supply just their own load or be shared across the microgrid according to different operational strategies . in the latter case , residential customer scheduling is based on mixed - integer linear programming in which integers parametrize the power status of each house and linear constraints enforce minimum up - time and down - time of power provision . the different operating strategies to distribute pv energy across the members of the microgrid include different objective functions which focus on the optimal use of solar pv within a microgrid : a ) forcing all houses to receive power at least once , b ) maximizing the number of switches without forcing all houses to be connected , c ) maximizing the utilization of available solar power distributed among the grid to reduce power losses . additional strategies were considered which used a priority or fairness weighting matrix to determine scheduling . the weighting matrix was computed by considering the load - to - generation ratio for each house , but other weightings based on the priority of the loads in each house can be considered . a case study based on historical yearly data for ten houses was conducted . the mixed - integer linear programming results show that isolated self - consumption operation was the worst option for all houses . the objective which maximized the use of available solar power resulted in the highest percentage of load met . although results vary for each house , the trends over the year are consistent . future work will include backup generation such as storage and distributed energy resources . j. c. vasquez , j. m. guerrero , j. miret , m.
castilla , d. vicuna , and l. garca , `` hierarchical control of intelligent microgrids , '' _ industrial electronics magazine , ieee _ , vol . 4 , no . 4 , pp .2329 , 2010 .k. samarakoon , j. ekanayake , and n. jenkins , `` investigation of domestic load control to provide primary frequency response using smart meters , '' _ smart grid , ieee transactions on _ , vol . 3 , no . 1 ,pp . 282292 , 2012 .a. viana and j. p. pedroso , `` a new milp - based approach for unit commitment in power production planning , '' _ international journal of electrical power & energy systems _ , vol .44 , no . 1 ,pp . 9971005 , 2013 .l. bahiense , g. c. oliveira , m. pereira , and s. granville , `` a mixed integer disjunctive model for transmission network expansion , '' _ power systems , ieee transactions on _ , vol . 16 , no . 3 , pp .560565 , 2001 .h. zhang , v. vittal , g. t. heydt , and j. quintero , `` a mixed - integer linear programming approach for multi - stage security - constrained transmission expansion planning , '' _ power systems , ieee transactions on _ , vol . 27 , no . 2 , pp . 11251133 , 2012 .h. morais , p. kadar , p. faria , z. a. vale , and h. khodr , `` optimal scheduling of a renewable micro - grid in an isolated load area using mixed - integer linear programming , '' _ renewable energy _ , vol .35 , no . 1 , pp .151156 , 2010 .chung , w. liu , d. a. cartes , e. g. collins , jr .moon , `` control methods of inverter - interfaced distributed generators in a microgrid system , '' _ ieee trans . ind ._ , vol .46 , no . 3 , pp .10781088 , may / june 2010 .d. ioli , a. falsone , and m. prandini , `` an iterative scheme to hierarchically structured optimal energy management of a microgrid , '' in _ decision and control ( cdc ) , 2015 ieee 54th annual conference on_.1em plus 0.5em minus 0.4emieee , 2015 , pp .52275232 .e. l. ratnam , s. r. weller , c. m. kellett , and a. t. murray , `` residential load and rooftop pv generation : an australian distribution network dataset , '' _ international journal of sustainable energy _ , pp . 120 , 2015 .p. bonami , m. kilin , and j. linderoth , `` algorithms and software for convex mixed integer nonlinear programs , '' in _mixed integer nonlinear programming_.1em plus 0.5em minus 0.4emspringer , 2012 , pp .
|
despite the recent rapid adoption of rooftop solar pv for residential customers , islanded operation during grid outages remains elusive for most pv owners . in this paper we consider approaches to improve the reliability of electricity supply in the context of a residential microgrid , consisting of a group of residential customers each with rooftop solar pv , that are connected to the distribution network via a single point of common coupling . it is assumed that there is insufficient pv generation at all times to meet the electricity demand within the residential microgrid . three optimization - based algorithms are proposed to improve the reliability of electricity supply to each residential customer , despite variability and intermittency of the solar resource and periods of infrequent and sustained power outages in the electricity grid . by means of a case study we show that the majority of residential customers achieve greater reliability of uninterrupted electricity supply when connecting to the residential microgrid in comparison to operating in isolated self - consumption mode .
|
the dynamical features of a network of coupled oscillators provide a useful paradigm for understanding such diverse processes as epidemic spreading , traffic congestion and the general phenomenon of synchronization .synchronization is ubiquitous in nature as well as in experimental systems studied in various branches of science and engineering .the presence of synchronization and its effects have been observed in power grids , communication networks , social interactions , circadian rhythms , and ecology .one of the important factors that influences synchronization behavior is the topology defining the pattern of connectivity of the underlying network .the topology can also have an effect on the stability of the synchronized state . among the large number of studies that have been carried out on synchronization in networks ,the phenomenon of ` explosive ' synchronization has received important recent attention .the onset of such a synchronization is characterized by a discontinuous jump in the order parameter signifying a first order phase transition instead of the widely observed second order phase transition in the classical system of coupled phase oscillators introduced by kuramoto .the simple kuramoto model of coupled phase oscillators neglects two important physical effects , namely , inertia in the intrinsic dynamics of the oscillators and finite time delay in the coupling mechanism .inertia introduces a second order time derivative in the dynamical equation of the individual oscillators and a coupled system of such oscillators is known to display distinctly different behavior from the usual first order kuramoto system .for example , as shown by tanaka , the synchronization transition in a globally coupled network of kuramoto oscillators with inertia is no longer second order but displays the characteristics of a first order phase transition .the presence of time delay in the coupling that arises naturally in any physical system due to the finite propagation speed of signals , also has a profound effect on the collective dynamics of the oscillator system . in the present workour objective is to study the combined effect of inertia and time delay on the synchronization dynamics of a coupled system of kuramoto oscillators in a scale - free network . to model the scale free network we choose a star network as an approximation or as representative of the smallest unit of a scale - free network .we also assume a direct correlation between the natural frequencies of the oscillators and their degrees . 
in the absence of time delay the star network exhibits a discontinuous transition of synchronization as expected of a scale - free network .the inclusion of time delay brings about a rich variety in the collective behavior of this system .first of all it introduces multiple synchronous states with attendant multi - stability in the network .thus for a given coupling strength more than one stable state is now possible .a given synchronous state also exhibits frequency suppression as a function of the time delay .each value of the time delay therefore presents a different synchronization transition in the star network , and by tuning the time delay we find that a desired phase transition can also be obtained .the time delay dependence of the average frequency is the key factor that leads to different synchronization transitions .we obtain our results from extensive numerical simulations carried out over a large range of parameters by using the package xppaut .we also provide analytical predictions for the time delay dependence of the average frequency of the system that are in good agreement with the simulation results .our model system consists of a star network that links second order kuramoto phase oscillators through time delayed couplings .the model equations take the form , , \label{km2}\ ] ] where are the phases of the oscillators , is the mass , is the homogeneous coupling strength , is the time delay in the coupling and is the natural frequency of the oscillator .the frequencies are selected from a given probability density . are the elements of the adjacency matrix of connectivity of the network such that if two nodes and are connected then , otherwise . to track the synchronization transitioneffectively it is useful to look at the behavior of an order parameter defined as , where represents an average phase of the collective dynamics of the system and the parameter provides a measure of the coherence of the collective motion of the oscillators or in other words the degree of synchronization among the oscillators . when the system is fully synchronized , while denotes total incoherence .[ cols="^,^,^,^ " , ] in order to locally investigate the effect of time - delay in a scale - free network , we consider an undirected star network .a scale - free network whose average degree is small can be represented as a collection of star networks because hubs would be more dominant .therefore , here the hubs are represented as star motifs or networks . in the star network a central hub is connected to leaves or peripheral nodes .therefore , in a star network of nodes , the central hub has degree and each leaf or peripheral node has degree .the natural frequency of each node is set equal to its respective degree , i.e. , for the hub and for the peripheral nodes .we move to a rotating frame of the average phase with frequency . 
in this frame the new variables for the hub and the leavesare then defined as and , respectively .using these definitions we can rewrite eq.([km2 ] ) for the hub and the leaves separately as , , \label{km2_hub}\ ] ] , \label{km2_leaf}\ ] ] where the parameter is a function of the average frequency of the system .the global order parameter defined in eq.([opar ] ) can be rewritten for the star network as following multiplying both sides of eq.([op_star ] ) by , taking the imaginary part , and writing in terms of phases and , eq.([km2_hub ] ) can then be rewritten as in the phase - locked condition all time derivatives vanish .hence for the hub , which gives us , imposing a similar condition for the phase - locked peripheral nodes or leaves , i.e. , , eq.([km2_leaf ] ) reduces to =\frac{(c-\omega_l)}{\lambda}\sin\phi_h(t-\tau ) \pm\frac{1}{\lambda}\sqrt{[\lambda^2-(\omega_l - c)^2][1-\sin^2\phi_h(t-\tau)]}. \label{res_leaf}\end{aligned}\ ] ] at the critical coupling , from eq .( [ res_leaf ] ) we obtain =\sin\phi_h(t-\tau)$ ] , which further leads to . therefore , we can set the time delay in order to obtain different phase couplings between the hub and the peripheral nodes .eq.([res_leaf ] ) is valid only for , so that the values of the critical coupling for the existence of synchronized state would be determined by .therefore , is a necessary condition for the existence of the synchronous solutions .since the value of the average frequency depends on the time delay , i.e. , the value of the critical coupling also depends on the time delay . for above model system exhibits discontinuous synchronization transition as shown in fig.[fig : star_size](a ) .the order parameter is seen to make a discontinuous jump at a critical value of the coupling strength .this behavior is seen for different sizes of the network ranging over and , and it is observed that the critical value of the coupling strength increases with the size of the network . this is understandable since for a larger size of the network a larger number of oscillators have to be entrained to obtain synchronous behavior .we should also mention that the model star graph studied here should be viewed as an approximation to a larger scale - free network with a distribution of hubs some of which would have a large number of links .the single hub model is representative of one such hub and the values of chosen are representative of the expected degrees for such a hub in a realistic scale - free network with to nodes . in fig.[fig : star_size](b ) we show the effect of inertia ( still with ) on the critical coupling for synchronization for a given network size ( ) by varying the value of the mass .as can be seen an increase in increases the threshold value for the onset of synchronization .however the nature of synchronization , namely a first order phase transition , does not change . as a function of coupling strength with different values of time - delay introduced in a star network of leaves with . 
for different values of the time delay , the star network exhibits a different synchronization transition . we next look at the behavior of the system in the presence of time delay . we find that for finite values of time delay , the synchronization behavior of the system undergoes significant changes , and this is shown in fig.[fig : star_td ] where the order parameter is plotted against for a fixed value of and . we observe that the presence of a time delay can lower the threshold for the phase transition to a synchronous state compared to that in the absence of time delay . for instance , the value of the critical coupling for is now lowered to when . fig.[fig : star_td ] also demonstrates that each value of the time delay gives rise to a different synchronization transition , namely a first order or a second order phase transition . for example , a first order transition is obtained for , and a second order phase transition for . such interesting behavior results from a dependence of the average frequency , or the parameter , on the time delay , i.e. , . since the critical coupling depends on the parameter , it is fruitful to analyze how the parameter varies with the time delay . in fig.[fig : omg_tau ] the average frequency is shown as a function of time delay . one notes two important characteristic features about the dependence of on . starting from , as one increases the average frequency starts decreasing , displaying the so - called frequency suppression phenomenon that has been noted before for time delayed systems . beyond a certain value of , the frequency jumps to a higher value and continues to decrease again with along another branch . this behaviour is repeated as one moves to higher values of , indicating an oscillatory pattern . this oscillatory dependence between and accounts for the different behavior of the synchronization transition exhibited by the star network ( see fig . [ fig : star_td ] ) in contrast to the case without time delay where one only observes a first order transition ( see fig . [ fig : star_size ] ) . ( fig . [ fig : omg_tau ] : the average frequency as a function of the time delay for a fixed coupling strength in a star network of leaves ; the circles represent the simulation result , the solid lines the theoretical prediction of eq.([fq_del ] ) , showing an oscillatory dependence on the time delay . ) for a given oscillatory dependence between and , we can choose the time delay which makes the system prone to the onset of synchronization . for a star network of a given size , we can find a value of the time delay that lowers the threshold of the synchronization transition . a value of the time delay can also be determined which would produce the maximum value of the critical coupling . since we know the critical coupling , a smaller difference between and yields a lower value of , which enhances the onset of synchronization . by contrast , a larger frequency difference yields a higher value of and hinders the onset of synchronization . these cases are supported by the synchronization diagrams in fig.[fig : omgtau_cmp ] for values of the time delay chosen according to the oscillatory dependence between and from fig . [ fig : omg_tau ] .
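the delay - induced behaviour discussed above can be explored with a direct numerical integration of the model . the sketch below is a minimal python illustration , not the xppaut setup used for the results in this paper : it assumes the equation of motion m*theta_i'' + theta_i' = omega_i + lambda * sum_j a_ij * sin( theta_j(t - tau) - theta_i(t) ) on a star of k leaves , natural frequency equal to degree , a plain euler step and a zero - phase history , all of which are assumptions of the example .

```python
import numpy as np

def star_kuramoto_delay(K=10, m=1.0, lam=1.0, tau=0.5, dt=0.01, T=2000.0):
    """Second-order Kuramoto oscillators on a star network with delayed coupling.

    Node 0 is the hub (omega = K); nodes 1..K are leaves (omega = 1).
    Returns the order parameter r averaged over the second half of the run.
    """
    N = K + 1
    omega = np.ones(N); omega[0] = K                  # natural frequency = degree
    A = np.zeros((N, N)); A[0, 1:] = A[1:, 0] = 1.0   # star adjacency

    steps = int(T / dt)
    d = max(1, int(round(tau / dt)))                  # delay measured in steps
    hist = np.zeros((d, N))                           # ring buffer of past phases
    theta = np.zeros(N)
    vel = omega.copy()                                # start near free rotation
    r_acc, n_acc = 0.0, 0

    for s in range(steps):
        delayed = hist[s % d]                         # approx. theta(t - tau)
        coupling = lam * (A * np.sin(delayed[None, :] - theta[:, None])).sum(axis=1)
        acc = (omega - vel + coupling) / m            # m*theta'' = -theta' + omega + coupling
        vel += dt * acc
        theta += dt * vel
        hist[s % d] = theta                           # overwrite the oldest entry
        if s > steps // 2:
            r_acc += abs(np.exp(1j * theta).mean()); n_acc += 1
    return r_acc / n_acc
```

sweeping lambda upward and then downward ( reusing the final state of the previous run as the initial condition ) reproduces the forward and backward branches of the synchronization diagram and exposes the hysteresis that signals a first order transition ; the snippet above only returns a single point of such a diagram .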
for instance , for the time delay of , the difference is larger , giving rise to a first order transition . however , for a time delay of , the difference is small , which leads to a second order transition . it is found that a first order transition is observed if the time delay is set to obtain a higher value of the critical coupling . ( fig . [ fig : omgtau_cmp ] : order parameter as a function of coupling strength for two values of the time delay in a star network of nodes . ) in the synchronized state , there is an abundance of synchronized nodes which are locked to the mean field that rotates with a constant frequency , so that . in order to analyze the parameter , we have assumed that after the transition to synchrony all nodes in the star network evolve with the same average phase , i.e. , for the hub and ( ) for the peripheral nodes . substituting these solutions in eqs . ( [ km2_hub ] ) and ( [ km2_leaf ] ) , we obtain a set of equations ; summing this set of equations over the hub and the peripheral nodes , we obtain the parameter of the star network given by eq.([fq_del ] ) . eq.([fq_del ] ) is similar to the relation between the average frequency and the time delay of the star network of first order kuramoto oscillators in the presence of time delay . therefore , in our system as well we expect a similar kind of oscillatory dependence between and as seen in the case of ref . . from ref . , the necessary condition for the stability of the synchronous states ( ) is given by . thus , the product must satisfy for . therefore , the product should be replaced by ( ) in eq . ( [ fq_del ] ) . thus eq . ( [ fq_del ] ) , with this replacement , is the analytical prediction relating the average frequency and the time delay for the star network . in fig . [ fig : omg_tau ] , eq . ( [ fq_del ] ) is plotted for , and for a star network of . from fig . [ fig : omg_tau ] there is good agreement between the simulation results and the analytical predictions made by eq . ( [ fq_del ] ) . the critical coupling for the star network can also be determined by rewriting eq . ( [ fq_del ] ) in terms of , using the expression , and considering . to summarize , in this paper we have studied the synchronization behaviour of a scale - free network of coupled oscillators in the presence of inertia in the system and time delay in the coupling . the presence of inertia is modelled by using second order kuramoto oscillators , and the scale - free network is approximated by a star network with the frequency of each node set equal to its degree .
in the absence of time delay the network shows the onset of first order transitions to synchronized behaviour and the threshold for this transition increases with an increase in inertia .the presence of time delay can mitigate to some extent the influence of inertia by lowering the threshold for synchronization .in addition time delay can also influence the nature of the synchronization transition by making it switch from a first order to a second order transition .the mechanism underlying this change is associated with the change in the average frequency of the system as a function of time delay .thus time delay and inertia provide us with parametric handles that can be used to control the onset of synchronization in a scale - free network .time delay offers a further facility of changing the nature of the transition from a first order to a second order synchronization .our model calculations based on the star network agree very well with the numerical simulation results .since both inertia and time delay are likely to be present in any realistic physical network our results can help in understanding the microscopic nature of synchronization phenomena in them and also provide a means of controlling the nature and onset of such synchronizations .
|
we examine the onset of the synchronization transition in a star network of kuramoto phase oscillators in the presence of inertia and a time delay in the coupling . a direct correlation between the natural frequencies of the oscillators and their degrees is assumed . the presence of time delay is seen to enhance the onset of first order synchronization . the star network also exhibits different synchronization transitions depending on the value of the time delay . an analytical prediction for the effect of the time delay is provided and is further supported by simulation results . our findings may provide valuable insights into the mechanisms that lead to synchronization on complex networks .
|
analysis of the increasingly available genomic data continue to reveal the extent of hybridization and its importance in the speciation and evolutionary innovations of several groups of species and animals . when hybridization occurs , the evolutionary history of the species and their genomes is _ reticulate _ and best modeled by a _ phylogenetic network _ which , in our context , is a special type of rooted , directed , acyclic graphs .methods have been devised for inferring phylogenetic networks from pairs of gene trees ( e.g. , ) , larger collections of gene trees , and directly from sequence data ( e.g. , ) .a salient feature of all these methods is that the incongruence of gene tree topologies , and more generally the heterogeneity among the different loci , is caused solely by reticulate evolutionary events such as horizontal gene transfer or hybridization .while hybridization causes incongruence among gene trees , other evolutionary events can also result in incongruence , such as incomplete lineage sorting ( ils ) and gene duplication / loss .in particular , as ( successful ) hybridization occurs between closely related species , it is important to account simultaneously for incomplete lineage sorting , a phenomenon that arises in similar situations . while a wide array of methods have been devised for inference under ils along( see for recent surveys ) , it is important to integrate both hybridization and ils into a single framework for inference . needless to say , it is important to integrate all sources of incongruence into a single framework , but that is much beyond the scope of this paper .the main task , then , becomes : given a gene tree topology and a phylogenetic network , to reconcile the gene tree within the branches of the phylogenetic network , thus allowing simultaneously for hybridization and ils .when a method for achieving this task is wrapped " by a strategy for searching the phylogenetic network space , the result is a method for inferring reticulate evolutionary histories in the presence of both hybridization and ils .therefore , is is very important to solve the reconciliation problem .indeed , in the last five years , several attempts have been made , following different approaches , to address the problem of inferring hybridization in the presence of ils .however , due to the computational challenges of the problem , these methods focused on very limited cases : fewer than 5 taxa , one or two hybridization events , and a single allele sampled per species .more recently , our group proposed two methods for detecting hybridization in the presence of incomplete lineage sorting , including a probabilistic method which computes the probability of gene tree topologies given a phylogenetic network and a parsimony method which computes the minimum number of extra lineages required to reconcile a gene tree within the branches of a phylogenetic network . while these methods are general in terms of the topologies and sizes of gene trees and phylogenetic networks , they are computationally intensive . in particular , these methods convert a phylogenetic network to a special type of trees , called _ multil - labeled trees _( mul - trees ) , and conduct computation on these trees while accounting for every possible mapping of genes to their leaves .this computation can be exponential in the number of leaves , and does explicit computations of coalescent histories of the gene genealogies . 
in this paper , we propose a novel way of computing the probability of gene tree topologies given a phylogenetic network , and a novel way of computing the minimum number of extra lineages of a gene tree and a phylogenetic network .both of them use the concept of _ ancestral configuration _ ( or ac ) which was introduced very recently for computing the probability of gene tree topologies given a species tree .the new algorithms are exact and much more efficient than the two mul - tree based algorithms we introduced in . in our extensive simulation studies ,we compared the running time of the new ac - based methods with the previous mul - tree based ones .we show that the new algorithms can speed up the computation by up to 5 orders of magnitude , thus allowing for the analysis of much larger data sets .furthermore , we discuss how the running time of the new methods is still affected by the topologies of the species networks , more specifically the configurations of reticulation nodes , and the topologies of gene trees .all methods described in this paper have been implemented in the phylonet software package which is freely available for download in open source at http://bioinfo.cs.rice.edu/phylonet .in this work , we assume the following definition of phylogenetic networks . [net - def ] a _phylogenetic -network _ , or -network for short , is an ordered pair , where is a directed , acyclic graph ( dag ) with , where ( 1 ) ( is the _ root _ of ) ; ( 2 ) , and ( are the _ external tree nodes _ , or _ leaves _ , of ) ; ( 3 ) , and ( are the _ internal tree nodes _ of ) ; and , ( 4 ) , and ( are the _ reticulation nodes _ of ) ; are the network s edges , and is the _ leaf - labeling _function , which is a bijection from to . for the probabilistic setting of the problem , we also associate with every pair of reticulation edges inheritance probabilities and such that . inheritance probability indicates the proportion of alleles in population that are inherited from population .gene tree _ is a phylogenetic network with no reticulation nodes .the way in which a gene evolves within the the branches of a phylogenetic network can be described by a _coalescent history _let be a phylogenetic network .we denote by the set of nodes in and by the set of nodes that are reachable from the root of via at least one path that goes through node . given a phylogenetic network and a gene tree , a _ coalescent history _ is a function such that the following two conditions hold : ( 1 ) if is a leaf in , then is the leaf in with the same label ( in the case of multiple alleles , is the leaf in with the label of the species from which the allele labeling leaf in is sampled ) ; and , ( 2 ) if is a node in , then is a node in .see fig .[ fig : coalhis ] in the appendix for an illustration . 
given a phylogenetic network and a gene tree , we denote by the set of all coalescent histories .then the probability of observing gene tree given phylogenetic network is where is the probability of coalescent history given phylogenetic network ( along with its branch lengths and inheritance probabilities ) .coalescent histories can also be used to compute the minimum number of extra lineages required to reconcile gene tree with , which we denote by , as methods for computing and when is a tree were recently given in and , respectively .recently , we proposed new methods for computing these two quantities when is a phylogenetic network .the basic idea of both of these methods is to convert the phylogenetic network into a _mul - tree _ and then make use of some existing techniques to complete the computation on instead of on .a mul - tree is a tree whose leaves are not uniquely labeled by a set of taxa .therefore , alleles sampled from one species , say , can map to any of the leaves in the mul - tree that are labeled by . for network on taxa ,we denote by the set of alleles sampled from species ( ) , and by the set of leaves in that are labeled by species .then a _ valid allele mapping _ is a function such that if , and , then .[ fig : net2tree ] in the appendix shows an example of converting a phylogenetic network into a mul - tree along with all valid allele mappings when single allele is sampled per species .suppose is the mul - tree converted from network .we denote by the set of all valid allele mappings for mul - tree and gene tree .then the probability of observing gene tree given can be computed using mul - tree as follows where is the set of coalescent histories of within mul - tree under valid allele mapping , and is the probability of observing coalescent history within under .furthermore , the minimum number of extra lineages required to reconcile gene tree with can also be computed using mul - tree by where is the total number of extra lineages of coalescent history within under allele mapping .the advantage of the mul - tree based techniques is that once the network is converted to the mul - tree , tree - based techniques from the multi - species coalescent theory apply with minimal revision . nonetheless , from eq . andeq . we can see that the running time of both two methods depend on the number of valid allele mappings .let be the set of leaves of , and be the number of alleles sampled from some in in gene tree .then the number of valid allele mappings of and is bounded from below and above by and , respectively , where and are the minimum and maximum number of reticulation nodes on any path from leaf in to the root of respectively .we can see that when the number of taxa or sampled alleles increases , or when the number of reticulation nodes increases , this number can quickly become very large which makes the computations prohibitive .furthermore , computing term in eq .using coalescent histories will become infeasible when the number of taxa or sampled alleles increases .central to our methods is the concept of _ ancestral configuration _ ( or simply configuration , or ac ) .when it was first introduced , it was defined on species trees for computing the probability of gene tree topologies . in this work ,we extend it to species networks . 
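to get a feeling for how quickly the mul - tree approach blows up , the count of valid allele mappings can be sketched in a few lines . the snippet below treats the alleles of each species as choosing their mul - tree leaf copies independently , which matches the bounds quoted above ; the dictionary names and the toy example at the end are assumptions made for illustration .

```python
from math import prod

def count_valid_allele_mappings(copies_per_species, alleles_per_species):
    """Rough count of valid allele mappings onto a MUL-tree.

    copies_per_species[x]  : number of MUL-tree leaves labeled with species x
                             (grows like 2**r with r reticulations above x
                             when the paths to the root are independent)
    alleles_per_species[x] : number of alleles sampled from species x
    Assumes each allele chooses its leaf copy independently.
    """
    return prod(copies_per_species[x] ** alleles_per_species[x]
                for x in alleles_per_species)

# e.g. 24 species, one allele each, 4 of them below two reticulation nodes:
copies = {f"s{i}": (4 if i < 4 else 1) for i in range(24)}
alleles = {f"s{i}": 1 for i in range(24)}
print(count_valid_allele_mappings(copies, alleles))   # 256 mappings
```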
given a species network and a gene tree , an ancestral configuration at node of , which we denote by ( the subscript may be omitted when the identity of node is clear from the context ) ,is a set of gene lineages at node under some coalescent history in . the number of gene lineages in configuration is denoted by . for example , given the coalescent history shown in fig .[ fig : coalhis ] , for reticulation node , we have and ; for the root of , we have and .furthermore , we denote by a set of pairs where is a configuration at node of and is the weight of , and by a set of where is a configuration that about to leave branch of and is the weight of .we will discuss how to set / use the weight below .assume and are two gene lineages that meet at some node in a gene tree . when reconciling within the branches of a species network , after they two entered the same branch of , they might or might not have coalesced before leaving that branch , the probability of which depends on the length ( in terms of time ) and width ( in terms of population size ) of that branch. therefore , one configuration entering a branch of might give rise to several different configurations leaving that branch with different probabilities .for example , suppose a gene tree has a subtree ( tree with root , leaf - child of the root , child of the root , and two leaves and that are children of ) . then if a configuration entered a branch of , it could give rise to one of three different configurations leaving that branch , including and .we denote by , for configuration and gene tree , the set of all configurations that might coalesce into with respect to the topology of .we now show how to use configurations to compute and efficiently . for a configuration ,we denote by the minimum total number of extra lineages on all branches that the extant gene lineages in having passed through from time to coalesce into the present gene lineages in . in this method ,weight in corresponds to , where is either where is a node or where is a branch .[ lemma : xlupdate ] let be a configuration entering a branch and be a configuration that coalesced into when leaving . then where is the number of extra lineages on branch .we define a function called * createcacsforxl * which takes a gene tree , a branch of the network and a set of configuration - weight pairs that enter branch , and returns a set of configuration - weight pairs that leave branch .note that although one configuration can coalesce into several different configurations along a branch , under parsimony we only need to keep the one that has the minimum total number of extra lineages .therefore and there is 1 - 1 correspondence between configurations in and configurations in .for a phylogenetic network and a gene tree , the algorithm for computing the minimum number of extra lineages required to reconcile within is shown in alg .[ alg : countxl ] .basically , we traverse the nodes of the network in post - order .for every node we visit , we construct the set of configuration - weight pairs for node based on its type .recall that there are four types of nodes in a phylogenetic network , which are leaves , reticulation nodes , internal tree nodes , and the root .finally when we arrive at the root of , we are able to obtain . 
at a reticulate node who has parents and ,every gene lineage could independently choose to go toward or .so for every in , there are different ways of splitting into two configurations , say and , such that .for example , a configuration can be split in four different ways including and , and , and , and and .it is important to keep track of those gene lineages that are originally coming from one splitting so that we could merge them back once they are in the same population again .note that there is no need to consider coalescent events on branch or , because all gene lineages on these two branches were already in the same population on branch . and under parsimony where all gene lineages are assumed to coalesce as soon as they can , all possible coalescent events that could happen among these gene lineages must have already been applied on branch . as a result , and can be put directly into and respectively .the _ compatibility _ of configurations in the algorithm is defined as follows .two configurations are compatible if for every reticulation node , either both configurations went through that node and had resulted from the same split of an ancestral configuration , or at least one of the two configurations did not go through that node .[ fig : algoexample ] in the appendix illustrates configurations generated for every node and branch of a network given a gene tree . for a configuration ,we denote by the cumulative probability of the extant gene lineages in coalescing into the present gene lineages in from time . in this method ,weight in corresponds to , where is either where is a node or where is a branch .[ lemma : coalesceprobonbranch ] let be a configuration entering branch of network with branch length . then the probability of observing configuration leaving branch is where is the probability that gene lineages coalesce into gene lineages within time , is the number of ways that coalescent events can occur along branch to coalesce into with respect to the gene tree topology , and is the number of all possible orderings of coalescent events .the details of how to compute , and are given in .[ lemma : probupdate ]let be a configuration entering a branch and be a configuration that coalesced into when leaving .then we define a function called * createcacsforprob * which takes a gene tree , a branch of the network and a set of configuration - weight pairs that enter branch , and returns a set of all possible configuration - weight pairs that leave branch .note that several configurations can coalesce into the same configuration along a branch , but we only need to keep one copy of every distinct configuration . here , we define two configurations to be the _ identical _ if they satisfy the following two conditions : ( 1 ) they contain the same set of gene lineages , and ( 2 ) for every reticulation node in the network , either neither of them contain lineages that have passed through it , or the lineages in these two configurations that passed through it originally came from one splitting at node .the algorithm for calculating the probability of observing a gene tree given a species network is shown in alg .[ alg : calprob ] .the basic idea is similar to the parsimony method we described in the previous section .an illustration is given in fig .[ fig : algoexample ] . 
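two small building blocks of the computations described above can be sketched directly . the first is the probability that gene lineages coalesce into lineages along a branch of length t ( in coalescent units ) , for which we use the standard coalescent formula ( tavare , 1984 ) that this literature builds on ; the second enumerates the 2^{|c|} ways a configuration is split between the two parents of a reticulation node . both are illustrative sketches rather than the phylonet implementation , and the topology - dependent factors and of lemma [ lemma : coalesceprobonbranch ] are not computed here .

```python
from math import exp, factorial
from itertools import product as cartesian

def g_ij(i, j, t):
    """Probability that i gene lineages coalesce into j lineages within time t
    (t in coalescent units), standard Tavare (1984) formula."""
    total = 0.0
    for k in range(j, i + 1):
        coef = (2 * k - 1) * (-1) ** (k - j) / (factorial(j) * factorial(k - j) * (j + k - 1))
        prod_term = 1.0
        for y in range(k):
            prod_term *= (j + y) * (i - y) / (i + y)
        total += exp(-k * (k - 1) * t / 2.0) * coef * prod_term
    return total

def split_configuration(config):
    """All 2**|config| ways the lineages of a configuration can be divided
    between the two parents of a reticulation node."""
    config = list(config)
    for choice in cartesian((0, 1), repeat=len(config)):
        left = frozenset(l for l, c in zip(config, choice) if c == 0)
        right = frozenset(l for l, c in zip(config, choice) if c == 1)
        yield left, right

print(round(g_ij(2, 1, 1.0), 4))                        # 1 - e^{-1} ~ 0.6321
print(len(list(split_configuration({"a", "b", "c"}))))  # 8 possible splits
```

in the full probability algorithm each split would in addition be weighted by the inheritance probabilities of the two reticulation edges .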
at every reticulation node in the species network ,every configuration in is split into two configurations in all possible ways .this may result in multiple pairs in a set where their configurations have the same set of gene lineages but not considered to be the same because they were not originally from one splitting at some reticulation node some lineages in them have passed through .it may increase the number of configurations significantly .it is clear that the running time of both these two algorithms depends on the number of configurations .so in order to reduce the number of configurations so as to speedup the computation , we make use of _ articulation _ nodes in the graph ( an articulation node is a node whose removal disconnects the phylogenetic network ) . obviously , the reticulation nodes inside the sub - network rooted at an articulation node are independent of the reticulation nodes outside the sub - network .so at articulation node we can clear all the information about the splittings at all reticulation nodes under so that all configurations at containing the same set of gene lineages are considered to be the same .more precisely , when traversing the species network , after constructing for some internal tree node as we have described in alg .[ alg : countxl ] and alg .[ alg : calprob ] , if is an articulation node , we clear all the information about splittings at all reticulation nodes in the sub - network rooted at . then for counting the minimum number of extra lineages, we update to be such that only the configuration - weight pair that has the minimum weight is left , using the statement : . and for computing the probability of the topology of a gene tree , we keep only one copy of every distinct configuration in the sense of the set of lineages it contains. more precisely , we update to be using .to study the performance of the two methods compared to the mul - tree based ones , we ran all four on synthetic data generated as follows .we first generated random 24-taxon species trees using phylogen , and from these we generated random species networks with , , , and reticulation nodes .when expanding a species network with reticulation nodes to a species network with reticulation nodes , we randomly selected two existing edges in the species network and connected their midpoints from the higher one to the lower one and then the lower one becomes a new reticulation node .then , we simulated , , , , , and gene trees respectively within the branches of each species network using the ms program . since the mul - tree methods are computationally very intensive , we employed the following strategy : for the parsimony methods , we bounded the time at 24 hours ( that is , killed jobs that did not complete within 24 hours ) . for the probabilistic ones , we bounded the time at 8 hours .all computations were run on a computer with a quad - core intel xeon , 2.83ghz cpu , and 4 gb of ram . for computing the minimum number of extra lineages , the results of the running time of both two methods are shown in fig .[ fig : mdcresult ] .overall , both two methods spent more time on data sets where the species networks contain more reticulation nodes .it is not surprising given the fact that adding more reticulation nodes increases the complexity of the networks in general .we can see that the speedup of the ac - based method over the mul - tree based method also increased when the number of reticulation nodes in the species networks increased .it is up to over orders of magnitude . 
in this figure , we only plot the results of the computations that could finish in hours across all different number of loci sampled .in fact , the ac based method finished every computation in less than 3 minutes , even for the largest data set which contained species networks with reticulations and gene trees . for the mul - tree based one , out of repetitions the numbers of repetitions that were able to finish in hours across all different loci are , , , and for data sets containing species networks with , , , and reticulation nodes . for computing the probability of the gene tree topologies given a species network , we were not able to run the mul - tree based one because we found it could not finish the computation in hours given even for the smallest data set ( one gene tree and a species network with one reticulation node ) .in contrast , the ac - based method only needed seconds on the same data set which implies a speedup of at least orders of magnitude .part of the results of the ac based algorithm are shown in fig .[ fig : probresult ] .again , only the results of the computations that could finish successfully in 24 hours across all loci were plotted .we can see that the number of data points in the figure decreased significantly when the number of reticulation nodes in the species networks increased .in fact , out of repetitions , the numbers of repetitions that finished the computations successfully across all different loci are , , , and for data sets containing species networks with , , , and reticulation nodes respectively .the number of successful runs is much smaller than that for the parsimony method .furthermore , those computations failed not only because of the 24 hours time limit .part of them are due to memory issues : the number of configurations generated in the computation in order to cover all the possible coalescence patterns that could arise is much more than that needed in the parsimony method . and the increase in the number of reticulation nodes in the species network might result in a very large increase in the number of configurations . from fig .[ fig : mdcresult ] and fig .[ fig : probresult ] we observe that for both methods , the running time differed significantly from one data set to another .there are several factors that can affect the number of configurations generated during the computation which directly dominates the running time of the algorithm .two of the factors that affect performance are the number of leaves under a reticulation node , as well as the topology of the gene tree .we considered a controlled " data set , where we controlled the placement of the reticulation node as well as the shapes of the gene trees . in particular , we considered three networks , each with a single reticulation node , yet with 1 , 8 , and 15 leaves under the reticulation node , respectively ( see fig .[ fig : analysis1 ] in the appendix ) .further , we considered two gene trees : , whose topology is contained " with each of the three networks , and , whose disagreement with the three phylogenetic networks is very extensive that all coalescence events must occur above the root of the phylogenetic networks ( fig .[ fig : analysis1 ] in the appendix ) .we ran both ac - based methods on every pair of phylogenetic network and gene tree . for the parsimony method ,if the gene tree is a contained tree of the species network , it can be reconciled into the species network with extra lineages . 
in this case , for every articulation node of the network , has only one element and , and the running time is almost the same for all three networks and it is very fast ( table [ table : analysis1_mp ] in the appendix ) .however , for gene tree whose coalescent events have to happen all above the root , for every articulation node , has only one element and where equals the number of leaf nodes under .we know that at a reticulation node every configuration will give rise to configurations to each of its parents .therefore , the running time of increased when the number of nodes under the reticulation nodes in the species network increased , and who is a parent of the reticulation node has the largest set and where is the number of leaves under ( table [ table : analysis1_mp ] in the appendix ) .furthermore , we found that the number of valid allele mappings when using the mul - tree based method is equal to the largest size of generated for a node during the computation when all the coalescent events have to happen above the root of the species network if we do not reduce the number of configurations for articulation nodes .this is easy to see . for the ac - based algorithm , if we do not clear the splitting information at articulation nodes , then every element in , where is the root of the network , represents a different combination of the ways every leaf lineage took at every reticulation node . andevery valid allele mapping also represents the same thing .however , for most of the gene trees , not all coalescent events have to happen above the root , and that is part of where the ac - based algorithm improves upon the mul - tree based one . comparing and can see that for parsimony reconciliations , the more coalescent events that are allowed to occur under reticulation nodes with respect to the topology of the gene tree , the faster the method is . for the probabilistic method , since we need to keep all configurations so as to cover all possible coalescence patterns , the gene trees whose coalescent events have to happen above the root become the easiest case because they have only one reconciliation .it is exactly the opposite to the parsimony method where the gene trees whose coalescent events have to happen above the root take longest running time ( table [ table : analysis1_ml ] in the appendix ) . for the mul - tree based method , the probability is computed by summing up the probabilities of all coalescent histories in mul - tree under all valid allele mappings .however , for most cases using acs to compute the probability of a gene tree given a species tree is much faster than through enumerating coalescent histories due to the fact that the number of coalescent histories is much larger than the number of configurations generated .that is part of the reason why the ac based algorithm outperforms the mul - tree based one for computing the probability in terms of efficiency . a third factor that impacts performance is the dependency of the reticulation nodes in the phylogenetic network ( roughly , how many of them fall on a single path to the root ) .for parsimonious reconciliations , when the reticulation events are independent ( or , less dependent ) , the method is much faster .this is not surprising , given that almost all nodes are articulation nodes and the number of acs is reduced significantly . 
for the probabilistic reconciliation , a similar trend holds , and the dependence of the reticulation nodes results in an explosion in the number of acs . these results are given in more detail in fig . [ fig : analysis2 ] and tables [ table : analysis2_mp ] and [ table : analysis2_ml ] in the appendix . to sum up , for the data sets of the same size ( e.g. , number of taxa and reticulation nodes ) , the running time of the ac - based algorithms increases when there are more leaves under reticulation nodes and when the reticulation nodes are more dependent on each other . with respect to the topology of the gene tree and the species network , the more coalescent events that are allowed under reticulation nodes the faster the parsimony method is , and the opposite for the probabilistic method . for most cases , the ac - based methods are significantly faster than the mul - tree based ones . for parsimony , the gain in terms of efficiency comes from avoiding considering useless allele mappings , including the ones that can not yield the optimal reconciliation implied by the coalesced lineages in the configurations and the ones that correspond to the configurations being removed at articulation nodes . for probabilistic reconciliation , the gain comes from two parts . one is also avoiding considering useless allele mappings by removing corresponding configurations at articulation nodes . the other is using ac to compute the probability instead of enumerating the coalescent histories .
the results of running the ac - based algorithm for computing the minimum number of extra lineages given gene trees and species networks in fig .
[ fig : analysis1 ] . is the number of configurations at the reticulation node and is the maximum number of configurations generated at a node during the computation . we labeled the first node in post - order traversal that contains the largest set by in fig . [ fig : analysis1 ] . furthermore , the last column is the number of valid allele mappings if using the mul - tree based method . [ table : analysis1_mp ] ( the eight - column table body itself is not reproduced here . )
|
reconciling a gene tree with a species tree is an important task that reveals much about the evolution of genes , genomes , and species , as well as about the molecular function of genes . a wide array of computational tools has been devised for this task under individual evolutionary events such as hybridization , gene duplication / loss , or incomplete lineage sorting . work on reconciling gene trees with species phylogenies under two or more of these events has also begun to emerge . our group recently devised both parsimony and probabilistic frameworks for reconciling a gene tree with a phylogenetic _ network _ , thus allowing for the detection of hybridization in the presence of incomplete lineage sorting . while the frameworks are general and can handle any topology , they are computationally intensive , rendering their application to large datasets infeasible . in this paper , we present two novel approaches that address the computational challenges of the two frameworks and are based on the concept of _ ancestral configurations _ . our approaches still compute exact solutions while improving the computational time by up to five orders of magnitude . these substantial gains in speed extend the applicability of the unified reconciliation frameworks to much larger data sets . we discuss how the topological features of the gene tree and the phylogenetic network may affect the performance of the new algorithms . we have implemented the algorithms in our phylonet software package , which is publicly available and open source .
|
many of our basic conceptions about the nature of physical reality inevitably turn out to have been false , as novel empirical evidence is obtained , or paradoxical implications stemming from those concepts are eventually realised .this was expressed well by einstein , who wrote _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ what is essential , which is based solely on accidents of development? concepts that have proven useful in the order of things , easily attain such an authority over us that we forget their earthly origins and accept them as unalterable facts.the path of scientific advance is often made impassable for a long time through such errors .it is therefore by no means an idle trifling , if we become practiced in analysing the long - familiar concepts , and show upon which circumstances their justification and applicability depend , as they have grown up , individually , from the facts of experience . 
__ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ or , as he put it some years later , _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ the belief in an external world independent of the percipient subject is the foundation of all science .but since our sense - perceptions inform us only indirectly of this external world , or physical reality , it is only by speculation that it can become comprehensible to us . from thisit follows that our conceptions of physical reality can never be definitive ; we must always be ready to alter them , to alter , that is , the axiomatic basis of physics , in order to take account of the facts of perception with the greatest possible logical completeness . 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ __ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ and so it is in the same spirit , that i shall argue against a number of concepts in the standard cosmological picture that have changed very little in the past century , by making note of original justifications upon which they were based , and weighing those against empirical data and theoretical developments that have been realised through the intervening years .the essay will concentrate initially on the nature of cosmic expansion , which lacks an explanation in the standard cosmological model . through a discussion of the early developments in cosmology, a familiarity with the pioneering conception of expansion , as being always driven by a cosmological constant , will be developed , upon which basis it will be argued that the standard model which can not reconcile with this view affords only a very limited description .then , the nature of time in relativistic cosmology will be addressed , particularly with regard to the formulation of ` weyl s postulate ' of a cosmic rest - frame. the aim will therefore be towards a better explanation of cosmic expansion in general , along with the present acceleration that has recently become evident , by reconceiving the description of time in standard cosmology , as an approach to resolving this significant shortcoming of the big bang friedman - lematre - robertson - walker ( flrw ) models , and particularly the flat model that describes the data so well .the expansion of our universe was first evidenced by redshift measurements of spiral nebulae , after the task of measuring their radial velocities was initiated in 1912 by slipher ; and shortly thereafter , de sitter attempted the first relativistic interpretation of the observed shifts , noting that ` the frequency of light - vibrations diminishes with increasing distance from the origin of co - ordinates ' due to the coefficient of the time - coordinate in his solution . but the concept of an expanding universe , filled with island galaxies that would all appear to be receding from any given location at rates increasing with distance , was yet to fully form . 
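For reference, the statical form of de Sitter's solution that these early redshift discussions concern can be written, in the standard textbook notation (given here for orientation, not quoted from the essay), as

    \[
      ds^2 = -\Bigl(1-\tfrac{r^2}{R^2}\Bigr)\,dt^2
             + \Bigl(1-\tfrac{r^2}{R^2}\Bigr)^{-1}dr^2
             + r^2\,d\Omega^2 ,
      \qquad R^2 = \tfrac{3}{\Lambda},
    \]

so the coefficient of dt^2 diminishes with distance from the origin of coordinates: clocks, and hence light-vibrations, run slower there relative to an observer at the origin. This is the effect de Sitter first pointed to, prior to any account of test particles actually receding under the Λ-term.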
for one thing ,when de sitter published his paper , he was able to quote only three reliable radial velocity measurements , which gave merely odds in favour of his prediction .however , in 1923 eddington produced an updated analysis of de sitter space , and showed that the redshift de sitter had predicted as a phenomenon of his statical geometry was in fact due to a cosmical repulsion brought in by the -term , which would cause inertial particles to all recede exponentially from any one .he used this result to support an argument for a truly expanding universe , which would expand everywhere and at all times due to .this , he supported with an updated list of redshifts from slipher , which now gave odds in favour of the expansion scenario . that same year , weyl published a third appendix to his _ raum , zeit , materie _ , and an accompanying paper , where he calculated the redshift for the ` de sitter cosmology ' , the explicit form of which would only be found later , independently by lematre and robertson .weyl was as interested in the potential relevance of de sitter s solution for an expanding cosmology as eddington , and had indeed been confused when he received a postcard from einstein later that year ( einstein archives : [ 24 - 81.00 ] ) , stating , _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ with reference to the cosmological problem , i am not of your opinion .following de sitter , we know that two sufficiently separate material points are accelerated from one another . if there is no quasi - static world , then away with the cosmological term ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ eight days after this was posted , einstein s famous second note on friedman s paper , which he now referred to as ` correct and clarifying ' , arrived at _ zeitschrift fr physik_. einstein evidently had in mind that the cosmic expansion can be described with set to zero in friedman s solution , and he might have thought weyl would notice and make the connection but the latter evidently did not , as he wrote a dialogue the following year in which the proponent of orthodox relativity eventually states , ` if the cosmological term fails to help with leading through to mach s principle , then i consider it to be generally useless , and am for the return to the elementary cosmology'that being a particular foliation of minkowski space , which , of the three cosmological models known to weyl , was the only one with vanishing . 
at this point in the dialogue , the protagonist paulus perseveres , citing the evidence for an expanding universe , and therefore the de sitter cosmology as the most likely of the three known alternatives .weyl s excitement over its description is evident in paulus final statement : ` if i think about how , on the de sitter hyperboloid the world lines of a star system with a common asymptote rise up from the infinite past [ see fig . [fig : ds_lr ] ] , then i would like to say : the world is born from the eternal repose of ` father ther ' ; but once disturbed by the ` spirit of unrest ' ( _ hlderlin _ ) , which is at home in the agent of matter , ` in the breast of the earth and man ' , it will never come again to rest . ' indeed , as eq .( [ ds_lr ] ) indicates , and as illustrated in fig .[ fig : ds_lr ] , the universe emerges from a single point at , even though slices of constant cosmic time are infinitely extended thereafter and comoving geodesics _ naturally _ disperse throughout the course of cosmic time .thus , we have a sense of the concept of cosmic expansion that was common amongst the main thinkers in cosmology in the 1920s , who were considering the possibility of expansion driven by the cosmical repulsion in de sitter space .indeed , hubble was aware of this concept , as he wrote of the ` de sitter effect ' when he published his confirmation of cosmic expansion in 1929 ; and de sitter himself , in 1930 , wrote of as ` a measure of the inherent expanding force of the universe ' .thus , along with the evidence that our universe actually _ does _ expand , one had in - hand the description of a well - defined force to _ always _ drive that expansion .it was therefore a huge blow to eddington , e.g. , when in 1932 einstein and de sitter finally rejected that interpretation of cosmic expansion , in favour of a model that could afford no prior explanation for _ why _ the universe _ should _ expand .as he put it , _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ the theory recently suggested by einstein and de sitter , that in the beginning all the matter created was projected with a radial motion so as to disperse even faster than the present rate of dispersal of galaxies , leaves me cold .one can not deny the possibility , but it is difficult to see what mental satisfaction such a theory is supposed to afford . 
to see why the big bang flrw models with matter provide no explanation of expansion , for the reason stated by eddington , we need only look at friedman s equation , which describes the dependence of the scale - factor , a , on Λ and the density , ρ , and pressure , p , of matter . since ρ goes like a^{-4} for radiation or a^{-3} for non - relativistic matter , the _ decelerative _ force due to finite matter - densities blows up as a → 0 , while the _ accelerative _ force due to Λ vanishes ; so the ` inherent expanding force of the universe ' only contributes to the expansion of space later on , when the relative contributions of matter and radiation have sufficiently weakened . therefore , aside from weyl s vacuous de sitter cosmology , with its big bang singularity , the big bang flrw models can never _ explain _ the cosmic expansion they describe , which must be caused by the big bang singularity itself , i.e. , where the theory blows up . but since the cosmic microwave background radiation ( cmbr ) indicates that the universe _ did _ begin in a hot dense state at a finite time in the past , the model eddington had favoured instead ( in which an unstable einstein universe that existed since eternity would inevitably begin expanding purely due to Λ ) also can not be accepted . the principal source of standard cosmology s great _ explanatory deficit _ is the fact that although the non - vacuous big bang flrw models do _ describe _ expanding universes and in particular the flat model describes the observed expansion of our universe very well they afford no reason at all for _ why _ those universes _ should _ expand , since that could only be due to the initial singularity ; i.e. , as we follow the models back in time , looking for a possible cause of expansion , we eventually reach a point where the theory becomes undefined , and call that the cause of it all . in contrast , i ve discussed two flrw models , neither of which is empirically supported , which would otherwise better _ explain _ the expansion they describe , as the result of a force that is well - defined in theory .
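For concreteness, the Friedmann equations with a cosmological term, in their standard textbook form (the usual notation, not necessarily the essay's own), read

    \[
      \left(\frac{\dot a}{a}\right)^2
        = \frac{8\pi G}{3}\,\rho - \frac{k}{a^2} + \frac{\Lambda}{3},
      \qquad
      \frac{\ddot a}{a}
        = -\frac{4\pi G}{3}\,(\rho + 3p) + \frac{\Lambda}{3},
    \]

with ρ ∝ a^{-4} for radiation and ρ ∝ a^{-3} for non-relativistic matter; as a → 0 the matter terms overwhelm any fixed Λ/3, which is the point being made here.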
the basic cause and nature of cosmic expansion , along with its recently - observed acceleration ,are significant problems of the standard model ; so , condisering the evidence that the acceleration is best described by pure , there is strong motivation to search for an alternative big bang model that would respect the pioneering concept of expansion , as a direct consequence of the ` de sitter effect ' in the modified einstein field equations .it is therefore worth investigating the axiomatic basis of the robertson - walker ( rw ) line - element .as i will eventually argue that the problem lies in the basic assumptions pertaining to the description of cosmic time , i ll begin by discussing some issues related to the problem of accounting for a cosmic present .the problem of recognising a cosmic present is that , according to relativity theory , it should not be possible to assign one time - coordinate to the four - dimensional continuum of events that could be used to describe objective simultaneity , since two events that are described as simultaneous in one frame of reference will not be described as such by an observer in relative motion .however , as noted by bondi , _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ the newtonian concept of the uniform omnipresent even - flowing time was shown by special relativity to be devoid of physical meaning , but in 1923 h. weyl suggested that the observed motions of the nebulae showed a regularity which could be interpreted as implying a certain geometrical property of the substratum .this in turn implies that it is possible to introduce an omnipresent _ cosmic time _ which has the property of measuring _proper time _ for every observer moving with the substratum . 
in other words, whereas special relativity shows that a set of arbitrarily moving observers could not find a common ` time ' , the substratum observers move in such a specialized way that such a public or cosmic time exists .although the existence of such a time concept seems in some ways to be opposed to the generality , which forms the very basis of the general theory of relativity , the development of relativistic cosmology is impossible without such an assumption . __ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ in fact , as einstein himself noted in 1917 , _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ the most important fact that we draw from experience as to the distribution of matter is that the relative velocities of the stars are very small as compared with the velocity of light .so i think that for the present we may base our reasoning upon the following approximative assumption .there is a system of reference relatively to which matter may be looked upon as being permanently at rest . 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ thus , the assumption of a cosmic rest - frame and a corresponding cosmic time was justified in the derivation of einstein s ` cylindrical ' model .while einstein originally proposed this as an ` approximative assumption ' that the empirical evidence seemed to support , the fact that he did restore absolute time when it came to the problem of describing the universe on the largest scale was not lost on his peers .de sitter was immediately critical of the absolute time variable in einstein s model , noting that ` such a fundamental difference between the time and the space - coordinates seems to be somewhat contradictory to the complete symmetry of the field - equaitons and the equations of motion ' .and a few years later , eddington wrote that an objection to einstein s theory may be urged , since absolute space and time are restored for phenomena on a cosmical scale just as each limited observer has his own particular separation of space and time , so a being coexistive with the world might well have a special separation of space and time natural to him .it is the time for this being that is here dignified by the title absolute. therefore , he concluded , ` some may be inclined to challenge the right of the einstein theory to be called a relativity theory .perhaps it has not all the characteristics which have at one time or another been associated with that name ' indeed , although the assumption of an absolute time in relativistic cosmology is definitely not in the spirit of relativity , the theory is nt fundamentally incompatible with such a definition .furthermore , it is significant that despite such early criticisms , einstein never wavered in assuming an absolute time when he came to consider the cosmological problem , i.e. 
as he always favoured the friedman solutions ( with ) , which begin by postulating the same .so , we have two opposing descriptions of relativistic time both of which are principally due to einstein himself!and what i ll now argue is that developments both in cosmology and in our understanding of relativity theory which have taken place in the past century demand the latter that there is one absolute cosmic time relative to which every observer s proper time will measure , as space - time will be perceived differently due to their absolute motion through the cosmic present that must be uniquely and objectively defined rather than the former implication of einstein s 1905 theory of relativity .in the case of special relativity , a description in which space - time emerges as a clearly defined absolute cosmic present endures , can be realised by considering four - dimensional minkowski space , as a background structure , and a three - dimensional universe that actually flows equably though it with the past space - time continuum emerging as a purely ideal set of previous occurrences in the universe . then, if we begin in the cosmic rest - frame , in which fundamental observers world lines will be traced out orthogonal to the cosmic hyperplane , photons can be described as particles that move through that surface at the same rate as cosmic time , thus tracing out invariant null - lines in space - time . in this way , the evolution of separate bodies , all existing in one three - dimensional space , forms a graduating four - dimensional map .the causal and inertial structures of special relativity are thus reconciled by describing the world lines of all observers in uniform motion through the cosmic present as their proper time axes , and rotating their proper spatial axes accordingly , so that light will be described as moving at the same rate in either direction of proper ` space ' .and then , so that the speed of photons along invariant null - lines will actually be the same magnitude in all inertial frames , both the proper space and time axes in these local frames must also be scaled hyperbolically relative to each other .this description of the emergence of space - time in a special relativistic universe can be illustrated in the following way .consider a barograph , consisting of a pen , attached to a barometer , and a sheet of paper that scrolls under the pen by clockwork .the apparatus may be oriented so that the paper scrolls downwards , with changes in barometric pressure causing the pen to move purely horizontally .we restrict the speed of the pen s horizontal motion only so that it must always be less than the rate at which the paper scrolls underneath it .the trace of the barometric pressure therefore represents the world line of an arbitrarily moving observer in special relativistic space - time , with instantaneous velocity described in this frame by the ratio of its speed through the horizontal cosmic present and the graph paper s vertical speed , with ` speed ' measured in either case relative to the ticking of the clockwork mechanism , which therefore cancels in the ratio .now , in order to illustrate the relativity of simultaneity , we detach the pen ( call it ) from the barometer so that it remains at rest absolutely , and add another pen , , to the apparatus , at the exact same height , which moves horizontally at a constant rate that s less than the constant rate that the paper scrolls along ; therefore , with _ absolute velocity _ less than the absolute speed limit 
.furthermore , we make and ` observers ' , by enabling them to send and receive signals that transmit horizontally at the same rate ( in clockwork time ) as absolute time rolls on ( in clockwork time ) , thus tracing out lines on the graph paper with unit speed .as this system evolves , the two ` timelike observers ' can send these ` photons ' back and forth while a special relativistic space - time diagram is traced out .if we d rather plot the map of events in coordinates that give the relevant description from s perspective , we use the lorentz transformation equations corresponding to the description of the map as minkowski space - time : a spacelike line is drawn , tilted from the horizontal towards s world line by the appropriate angle , and the events along that surface are described as synchronous in that frame , even though they take place sequentially in real time . in particular, at the evolving present , s proper spatial axis extends , in one direction , onto the empty sheet of graph paper in which events have not yet occurred , and , in the other direction , into the past space - time continuum of events that have already been traced onto the paper while the real present hyperplane , where truly simultaneous events are occurring , is tilted with respect to that axis of relative synchronicity .the main difference between this interpretation of special relativity and einstein s original one , is that ` simultaneity ' and ` synchronicity ' have objectively different meanings for us , which coincide only in the absolute rest frame whereas einstein established an ` operational ' concept of simultaneity , so that it would be synonimous with synchronicity , in section 1 , part 1 of his first relativity paper .einstein s definition of simultaneity is a basic assumption that s really no less arbitrary than newton s definitions of absolute space , time , and motion ; and , as i ll argue , the evidence from cosmology now stands against einstein s wrong assumption , as it is really more in line with newton s . the distinction between simultaneity and synchronicity in this different interpretation of relativity , can be understood more clearly through our barograph example , by adding two more ` observers ' , and , which remain at rest relative to , with positioned along the same hyperplane as and , and positioned precisely at the intersection of s world line ( so that the world lines of and exactly coincide , as they are traced out on the space - time graph ) and s proper spatial axis ( therefore , on a different hyperplane than , , and ) ; thus , shall not be causally connected to , , and , since _ by definition _ information can only transmit along the cosmic hyperplane ; see fig .[ fig : elementary cosmology ] . and appear to coincide , is disconnected from the causally coherent set , .,title="fig:",height=160 ] and appear to coincide , is disconnected from the causally coherent set , .,title="fig:",height=160 ] the significant point that is clearly illustrated through the addition of and , is that although in the proper coordinate system of ( or or ) , appears to exist synchronously and at rest relative to , in contrast appears to exist in s ( spacelike separated ) past or future ( depending on the direction of absolute motion ; in fig .[ fig : elementary cosmology ] appears to exist in s relative past)is really the causally connected neighbour that remains relatively at rest , with which it should be able to synchronise its clock in the usual way ; i.e. 
, the synchronisation of s and s clocks will be _ wrong _ because _ simultaneous noumena will not be perceived as synchronous phenomena in any but the cosmic rest - frame_. according to this description , we should have to relinquish the concept that there can be no priviliged observers , as well as einstein s light - postulate in its original form .with regard to the latter , consider that photons will still be described as travelling at a constant speed in all directions of all reference frames , due to the invariance of null - lines .but this wo nt actually be true , since an observer moving through the universe will keep pace better with a photon in their direction of motion , and will remain closer to that photon at all later times , on the cosmic hyperplane .therefore , although light actually wo nt recede as quickly through the universe in the direction of absolute motion , it can always be described as such in the proper coordinate frame because it travels along invariant null - lines . and with regard to the former concept , it is useful to note galileo s argument that , to a person riding in the cabin of a moving ship , everything inside the cabin should occur just as if the ship were at rest .it was crucial for galileo to make this point by _ isolating _ the inertial system from its relatively moving surroundings as the point would have been less clear , e.g. , if he had argued that when riding in the back of a wagon one can toss a ball straight in the air and have it fall back to the same point within the wagon .however , if one should argue that there _ really _ ca nt be privileged observers in the universe , due to the relativity of inertia , one must go beyond this local - inertial effect the relativity of inertia and consider the frame with respect to its cosmic surroundings in which case the argument ca nt be justified . forconsider a neutrino , created in a star shortly after the big bang : in the neutrino s proper frame , only minutes may have elapsed since it left the star , throughout which time the galaxies would have formed , etc ., all moving past it in roughly the same direction , at nearly the speed of light .clearly the most reasonable interpretation , however , is that the neutrino has _ really _ been travelling through the universe for the past 13.8 billion years and this description may be given , with the cosmic present uniquely and objectively defined , in all frames including the neutrino s .furthermore , if we would assume that there are no privileged observers , it should be noted that the consequence of describing simultaneity and synchronicity as one and the same thing in all frames is a block universe a temporally singular ` absolute world ' in which ` the distinction between past , present , and future has only the significance of a stubborn illusion ' ; i.e. , ` the objective world simply _ is _ , it does not _happen_. only to the gaze of my consciousness , crawling upward along the life line of my body , does a section of this world come to life as a fleeting image in space which continuously changes in time ' ; ` there is no dynamics within space - time itself : nothing ever moves therein ; nothing happens ; nothing changes . 
one does not think of particles `` moving through '' space - time , or as ` following along '' their world lines .rather , particles are just `` in '' space - time , once and for all , and the world line represents , all at once , the complete life history of the particle .and so i ve argued against the simultaneity of synchronicity , a reasonably intuitive concept held in common between the theories of both newton and einstein .but is there any _ sensible _ justification for the concept that the space in which events _ really _ take place simultaneously _must _ be orthogonal to the proper time - axis of an inertial observer ? when our theories are interpreted in this way , is that because one can , e.g. , sit down on the floor with legs out in front , raise their right arm out to the side and their left arm up in the air , _ and then stick out their tongue in the direction in which time is flowing _ , for them as much as it is for their entire surroundings ?of course not .this is no more justified for someone who thus defines a right - handed coordinate system while sitting on solid ground , than it is for a person in the cabin of a ship whether that is floating on water or flying through space .therefore , intuition justifies only existence in space that endures with the ticking of everyone s watch and relativity theory _ demands _ that this can not be both coherently defined and synchronous with every inertial observer !now , although it may be argued that the alternative assumption of cosmic time is unobservable metaphysics , and therefore unscientific , that simply is nt true for cosmology does provide strong empirical evidence of an absolute rest - frame in our universe , as follows . as einstein noted already in 1917 , there appears to be a frame relative to which the bodies of our universe are at rest , on average .now , einstein had no idea of the scope of the universe at that time , but already by 1923 weyl realised the significance of this point , which has indeed stood the test of time , when he wrote that ` both the papers by de sitter and eddington lack this assumption on the `` state of rest '' of stars by the way the only possible one compatible with the homogeneity of space and time .without such an assumption nothing can be known about the redshift , of course . ' for it is true , even in de sitter space , that a cosmic time must be assumed in order to calculate redshifts ; e.g. , for particles in the comoving lematre - robertson frame illustrated in fig .[ fig : ds_lr ] and described by eq .( [ ds_lr ] ) , the redshift will be different from that in the frame of comoving particles in the three - sphere which contracts to a finite radius and subsequently expands ( as illustrated by the gridlines of the de sitter hyperboloid in fig .[ fig : ds_lr ] ) according to where describes the three - sphere .the existence of more than one formally distinct rw cosmological model in one and the same space - time thus illustrates the importance of defining a cosmic time . 
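As a concrete illustration of this frame-dependence, the same de Sitter space-time admits, in the standard notation with H = √(Λ/3) (these are the familiar textbook slicings, not equations reproduced from the essay), both the flat comoving foliation and the closed one:

    \[
      ds^2 = -dt^2 + e^{2Ht}\bigl(dx^2 + dy^2 + dz^2\bigr),
      \qquad
      ds^2 = -dt^2 + \frac{1}{H^2}\cosh^2(Ht)\,d\Omega_3^2 .
    \]

Particles comoving in the first foliation are not at rest in the second, so the redshifts ascribed to 'fundamental observers' depend on which congruence is taken to define cosmic time.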
since 1923 , a number of novel observations have strengthened the evidence for a cosmic present , such as hubble s confirmation of cosmic expansion , the detailed measurement of the expansion rate that has lately been afforded through type ia supernovae observations , and the discovery of the cmbr , which gives a detailed signature of the cosmic rest - frame relative to which we are in fact moving , according to the common interpretation of its dipole anisotropy .thus , the assumption of a cosmic present is now very well justified by empirical evidence .although many points should be considered in connection to the description of an absolute cosmic present , such as concepts of time travel , free will , and a causally coherent local description of gravitational collapse in the universe notwithstanding space - time curvature in general , the one consequence that i will note pertains to cosmology , and a better explanation of cosmic expansion . to start , note that in deriving the general line - element for the background geometry of flrw cosmology , robertson required four basic assumptions : i. a congruence of geodesics , ii .hypersurface orthogonality , iii . homogeneity , and iv .i. and ii . are required to satisfy weyl s postulate of a causal coherence amongst world lines in the entire universe , by which every single event in the bundle of fundamental world lines is associated with a well - defined three - dimensional set of others with which it ` really ' occurs simultaneously .however , it seems that ii .is therefore mostly required to satisfy the concept that synchronous events in a given inertial frame should have occurred simultaneously , against which i ve argued above . in special relativity ,if we allow the fundamental world lines to _ set _ the cosmic rest - frame , then the cosmic hyperplane should be orthogonal but that should nt be the case in general . indeed , as i ve shown in my phd thesis , in the cosmological schwarzschild - de sitter ( sds ) solution , for which , is _ timelike _ , and is forever _ spacelike_ , the -coordinate should well describe the cosmic time_ and _ factor of expansion in a universe in which , in the coordinates carried by fundamental observers , the cosmic present would not be synchronous , and would evolve in proper time as ,\label{scalefac}\ ] ] which is _ incidentally _ also the flat scale - factor of the standard model , that s been empirically constrained this past decade ; see appendix [ sec_csdscs ] for a derivation of eq .( [ scalefac ] ) beginning from eq .( [ sds_statical ] ) , and a discussion of the result s connection to cosmology .this is the rate of expansion that _ all _ observers would measure , if distant galaxies were themselves all roughly at rest with respect to fundamental world lines .but in contrast to flrw theory , this universe actually has to expand at all a result of the ` de sitter effect ' ; i.e. , if such a universe did come to exist at any infinitesimal time , it would _ necessarily _ expand and in exactly the manner that we observe which may be the closest to an explanation of that as we can achieve .it is , of course , important to stress that this intriguing result is utterly meaningless if simultaneity should rather be defined as synchronicity in a given frame of reference . 
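For reference, the flat scale-factor of the standard model referred to here has, in its usual matter-plus-Λ form (a standard result quoted for comparison, not the expression from the author's thesis), the shape

    \[
      a(t) \propto \sinh^{2/3}\!\Bigl(\tfrac{\sqrt{3\Lambda}}{2}\,t\Bigr),
    \]

which solves the flat Friedmann equation with ρ ∝ a^{-3}, reduces to the matter-dominated a ∝ t^{2/3} behaviour at early times, and approaches exponential de Sitter expansion at late times.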
in that case , as lematre noted , the solution describes flat spatial slices extending from to , with particles continuously ejected from the origin . it is therefore only by reconceiving the relativistic concepts of time and simultaneity that sds can be legitimated as a coherent cosmological model with a common origin and one with the very factor of expansion that we ve measured which really _ should _ expand , according to the view of expansion as being always driven by Λ . * acknowledgements : * thanks to craig callender for reviewing an earlier draft and providing thoughtful feedback that greatly improved this essay . thanks also to the many participants who commented on and discussed this paper throughout the contest , and fqxi for organising an excellent contest and providing criteria that helped shape the presentation of this argument .
, ed .: albert einstein : philosopher - scientist .open court publishing company .1 94 ( 1949 ) guth , a. h. : inflationary universe : a possible solution to the horizon and flatness problems .d * 23 * ( 2 ) 347 356 ( 1981 ) steinhardt , p. j. : the inflation debate .* 304 * ( 4 ) 36 43 ( 2011 ) ijjas , a. , steinhardt , p. j. , and loeb , a. : inflationary paradigm in trouble after planck2013 .phys . lett .b * 723 * 261 266 ( 2013 ) weinberg , s. : the cosmological constant problem .* 61 * ( 1 ) 1 23 ( 1989 ) carroll , s. m. , press , w. h. , and turner , e. l. : the cosmological constant .astrophys .* 30 * 499 542 ( 1992 ) peebles , p. j. e. , and ratra , b. : the cosmological constant and dark energy . rev . mod .* 75 * ( 2 ) 559 606 ( 2003 ) weinberg , s. : cosmology .oxford university press ( 2008 ) ellis , g. f. r. : a historical review of how the cosmological constant has fared in general relativity and cosmology .chaos , solitons and fractals * 16 * 505 512 ( 2003 ) penrose , r. : the road to reality . vintage ( 2005 ) hawking , s. w. , and penrose , r. : the singularities of gravitational collapse and cosmology .royal soc .london . a , math .* 314 * ( 1519 ) , 529 548 ( 1970 )during the essay contest discussions , the critical remarks on this essay that were most important for me , and were by far the most probing , were those offered by george ellis .professor ellis criticism of the final section indicated , first of all , that the brief mention i made there of a result from my phd thesis was not developed enough to pique much interest in it and in fact that , stated as it was there , briefly and out of context of the explicit analysis leading from eq .( [ sds_statical ] ) to ( [ scalefac ] ) , the point was initially lost .he wrote that the model is ` of course spatially inhomogeneous , ' when the spatial slices _ are _ actually homogeneous , but rather are anisotropic ; and when i pointed out to him that this is so because , in the cosmological form of the sds solution is _ timelike _ and is forever _ spacelike _ , he replied that ` the coordinate notation is very misleading ' .so , one purpose of this appendix is to provide the intermediate calculation between eqs .( [ sds_statical ] ) and ( [ scalefac ] ) , that had to be left out of the original essay due to space limitations and , in developing a familiarity with the notation i ve used , through this calculation , to ensure that no confusion remains in regard to the use of as a timelike variable and as a spacelike one . for the notationis necessary both in order to be consistent with every other treatment of the sds metric to date , and because , regardless of whether is timelike or spacelike in eq .( [ sds_statical ] ) , it really does make sense to denote the coordinate with an ` ' because the space - time is isotropic in that direction . with these `book keeping ' items out of the way ( after roughly the first four pages ) , the appendix moves on to address professor ellis two more substantial criticisms , i.e. 
regarding the spatial anisotropy and the fact that the model has no dynamic matter in it ; for , as he noted , the model ` is interesting geometrically , but it needs supplementation by a dynamic matter and radiation description in order to relate to our cosmic history ' .these important points were discussed in the contest forum , but were difficult to adequately address in that setting , so the problem is given more proper treatment in the remaining pages of this appendix once the necessary mathematical results are in - hand .specifically , in the course of developing a physical picture in which the sds metric provides the description of a universe that _ would _ appear isotropic to fundamental observers who measure the same rate of expansion that we do , we ll come to a possible , consistent resolution to the problem of accounting for dynamic matter , which leads to a critical examination of the consistency and justification of some of the most cherished assumptions of modern physics .we begin by writing down the equations of motion of ` radial ' geodesics in the sds geometry , using them to derive a description of the sds cosmology that would be appropriate to use from the perspective of _ fundamental _ observers who evolve as they do , always essentially _ because of _ the induced field potential .it will be proved incidentally that the observed cosmological redshifts , in this homogeneous universe which is _ not _ orthogonal to the bundle of fundamental geodesics , must evolve through the course of cosmic time , as a function of the proper time of fundamental observers , with the precise form of the flat scale - factor i.e ., with exactly the form that s been significantly constrained through observations of type ia supernovae , baryon acoustic oscillations , and cmbr anisotropies .since the lagrangian , for timelike -geodesics with proper time in the sds geometry is independent of , the euler - lagrange equations indicate that is conserved ( ) . substituting eq .( [ e ] ) into eq .( [ l_timelike ] ) , then , we find the corresponding equation of motion in : while the value of may be arbitrary , we want a value that distinguishes a particular set of geodesics as those describing particles that are ` fundamentally at rest'i.e . , we ll distinguish a preferred fundamental rest frame by choosing a particular value of that meets certain physical requirements . to begin, we first note that where _ is _ spacelike , eq .( [ rtau ] ) describes the specific ( i.e. , per unit rest - mass ) kinetic energy of a test - particle , as the difference between its ( conserved ) _ specific energy _ and the gravitational field s _ effective potential _ then , it is reasonable to define the fundamental frame as the one in which the movement of particles in and is essentially _ caused _ by the non - trivial field potential i.e ., so that just where the gravitational potential is identically trivial ( ) , and the line - element , eq .( [ sds_statical ] ) , reduces to that of minkowski space . 
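since the inline symbols in this passage were lost in extraction, the following is a minimal numerical sketch under the standard schwarzschild-de sitter conventions that the argument appears to use (my assumption): f(r) = 1 - 2m/r - (lambda/3) r^2, conserved specific energy e = f(r) dt/dtau, and (dr/dtau)^2 = e^2 - f(r), with e = 1 picking out the 'fundamental' particles. the script integrates the e = 1 radial geodesic equation and checks it against the sinh^(2/3) closed form invoked below; the parameter values are illustrative only.

import numpy as np
from scipy.integrate import solve_ivp

M, Lam = 1.0, 0.3                      # illustrative values; 9*Lam*M**2 > 1 gives the cosmological case
k = np.sqrt(3.0 * Lam) / 2.0

def drdtau(tau, r):
    # (dr/dtau)^2 = e^2 - f(r) with e = 1 and f(r) = 1 - 2M/r - Lam*r^2/3
    return np.sqrt(2.0 * M / r[0] + Lam * r[0] ** 2 / 3.0)

def r_closed(tau):
    # candidate closed form: r = (6M/Lam)^(1/3) * sinh^(2/3)( sqrt(3*Lam)/2 * tau )
    return (6.0 * M / Lam) ** (1.0 / 3.0) * np.sinh(k * tau) ** (2.0 / 3.0)

tau0, tau1 = 0.1, 5.0                  # start just after tau = 0 to avoid the r = 0 origin
sol = solve_ivp(drdtau, (tau0, tau1), [r_closed(tau0)],
                dense_output=True, rtol=1e-10, atol=1e-12)
for t in np.linspace(tau0, tau1, 5):
    print(f"tau = {t:4.2f}   integrated r = {sol.sol(t)[0]:12.8f}   closed form = {r_closed(t):12.8f}")

the two columns agree, which is consistent with the claim developed in the following paragraphs that, for these geodesics, r grows with the observers' proper time with the same functional form as the flat scale-factor.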
from eq .( [ rtau ] ) , this amounts to setting ; therefore , a value corresponds to a particle that would not come to rest at {6m/\lambda} ] , is spacelike and is timelike regardless of the values of and ; therefore , it is consistent in any case to say that a particle with woud have non - vanishing spatial momentum there .indeed , from eq .( [ e ] ) , we find that at {6m/\lambda} ] is a negative mathematical abstraction that lies beyond a singularity at .in particular , we take to be the specific energy of a test - particle that s at rest with respect to the vanishing of the potential .it s simply a matter of algebraic consistency . therefore , we have as the specific energy of particles that would come to rest in the absence of a gravitational field , which are therefore guided purely through the effective field potential .we therefore use the geodesics with to define a preferred rest frame in the sds geometries which is particularly relevant to consider in the case of the cosmological sds solution , for which say that any particle whose world line is a geodesic with is one that has uniform momentum relative to the fundamental rest frame .we can now write the sds line - element , eq .( [ sds_statical ] ) , in the proper rest frame of a bundle of these fundamental geodesics , which evolve through and all with the same proper time , , and occupy constant positions in ` space ' . since must be positive in order to satisfy the requirement for to be timelike the requirement that the sds line - element be a cosmological rather than a local solution it is more convenient to work with scale - invariant parameters , , , , etc . , normalising all dimensional quantities by the cosmic length - scale ; see , e.g. , 66 in or 4 in ultimately amounts to striking out the factor from the -term in the line - element , or e.g. writing the flat scale - factor , eq .( [ scalefac ] ) , as the evolution of each geodesic through scale - invariant and is then given , through eqs .( [ e ] ) and ( [ rtau ] ) with , as eq .( [ rdot ] ) can be solved in closed - form using , after substituting . taking the positive root ( so that increases with ), we have , where the lower limit on has been arbitrarily set to 0 .thus , in this frame we can express as a function of each observer s proper time and an orthogonal ( i.e. synchronous , with constant ) spatial coordinate , , which may be arbitrarily rescaled without altering the description in any significant way .then , as long as is nonzero , a convenient set of coordinates from which to proceed results from rescaling the spatial coordinate as , which we are anyhow not interested in .an equivalent transformation in that case is found by setting , whence , and eqs .( [ g_chit ] ) , ( [ g_tt ] ) , and ( [ g_chichi ] ) , below , yield the line - element , . ] from which we find , after some rearranging , and , are useful here .( [ r(tau , xi ) ] ) , along with our eventual line - element , eq . ( [ sds_proper ] ) , was originally found by lematre , although his solution to eq .( [ rdot ] ) ( with dimensionality restored ) , ,\ ] ] is too large in its argument by a factor of . 
]\right),\label{r(tau , xi)}\ ] ] which immediately shows the usefulness of expressing the arbitrary coordinate in the form written in eq .( [ r(0 ) ] ) , because it allows eq .( [ t_int ] ) to be solved explicitly for .therefore , without loss of generality , we define a new coordinate , which is orthogonal to and exists on .this has no effect on the eventual form of the metric we will derive , due to the chain rule , but allows for a neater calculation .as such , we immediately have the useful result , the transformation , , may then be calculated from furthermore , to solve for , we can gauge the lower limits of the integrals over and , at , by requiring that their difference , defined by sets this calculation is straightforward : ) , since only partial derivatives of are needed here and below . ] so that . now , it is a simple matter to work out the remaining metric components as follows : our chosen reference frame immediately requires according to the lagrangian , eq .( [ l_timelike ] ) ; and by direct calculation , we find but this result is independent of the definition of , as explained above ; for if we had followed through with the more general coordinate , we would have found the metric to transform as , the other components remaining the same .therefore , the sds metric in the proper frame of an observer who is cosmically ` at rest ' , in which the spatial coordinates are required , according to an appropriate definition of , to be orthogonal to , ) is already orthogonal to . ]can generally be written , this proves lematre s result from 1949 , that slices are euclidean , with line - element , however , in the course of our derivation we ve also found that lematre s _ physical _ interpretation that the ` geometry is euclidean on the expanding set of particles which are ejected from the point singularity at the origin'is wrong .it s _ wrong _ to interpret this solution as describing the evolution of synchronous ` space ' that s truncated at the singularity at , from which particles are continuously ejected as increases which always extends from to along those lines of constant .but this is exactly the interpretation one is apt to make , who is accustomed to thinking of synchronous spacelike hypersurfaces as ` space ' that exists ` out there ' , regardless of the space - time geometry or the particular coordinate system that s used to describe it .however , we noted from the outset that these ` radial ' geodesics that we ve now described by the lines , along which particles all measure their own proper time to increase with , describe the world lines of particles that are all fundamentally at rest i.e ., at rest with respect to the vanishing of the effective field potential .these particles should nt all emerge from the origin at different times , yet somehow evolve together as a coherent set , but _ by weyl s principle _they should all emerge from the origin _ together _ , and evolve through the field that varies in _ together for all time_. in that case , space will be homogeneous , since the constant cosmic time slices can be written independent of spatial coordinates ; so every fundamental observer can arbitrarily set its spatial position as and therefore its origin in time as .the spaces of constant cosmic time should therefore be those slices for which .e . 
, we set as the proper measure of cosmic time in the fundamental rest frame of the universe defined by this coherent bundle of geodesics , so that eq .( [ t(tau , chi ) ] ) becomes the spacelike slices of constant are at angles in the -plane , and are therefore definitely not synchronous with respect to the fundamental geodesics .however , given this definition of cosmic time , the redshift of light that was emitted at and is observed now , at , should be where has exactly the form of the flat scale - factor ( cf .( [ scalefac_scaleinvtau ] ) ) , which is exactly the form of expansion in our universe that has been increasingly constrained over the last fifteen years .+ + + now , in order to properly theoretically interpret this result for the observed redshift in our sds cosmology , it should be considered in relation to flrw cosmology and particularly the theory s basic assumptions . as noted in [ sec_ifc ] , these are : i. a congruence of geodesics , ii . hypersurface orthogonality , iii . homogeneity , and iv .assumptions i. and ii .have a lot to do with how one defines ` simultaneity ' , which i ve discussed both in the context of special relativity in [ sec_tcp ] , and now in the context of the sds cosmology , in which simultaneous events in the course of cosmic time are _ not _ synchronous _ even in the fundamental rest frame_. as the discussion should indicate , the definition of ` simultaneity ' is somewhat arbitrary and it is an _ assumption _ in any case and should be made with the physics in mind .einstein obviously had the physics in mind when he proposed using an operational definition of simultaneity ; but it s since been realised that even special relativity , given this definition , comes to mean that time ca nt pass , etc ., as noted in [ sec_tcp ] . special relativityshould _ therefore be taken as an advance on newton s bucket argument , indicating that not only should acceleration be absolute , as newton showed ( see , e.g. , for a good discussion of newton s argument ) , but velocity should be as well , since time obviously passes which it ca nt do , according to special relativity , if motion is nt _ always _ absolute .usually , however , the opposite is done , and people who ve been unwilling or unable to update the subjective and arbitrary definitions of simultaneity , etc ., from those laid down by einstein in 1905 , have simply concluded that physical reality has to be a four - dimensional block in which time does nt pass , and the apparent passage of time is a stubborn illusion ; see , e.g. 
, 5 in for a popular account of this , in addition to refs .the discussion in [ sec_tcp ] shows how to move forward with a realistic , physical , _ relativistic _description of objective temporal passage , which can be done only when ` simultaneity ' is not equated with ` synchronicity ' ; and another useful thought - experiment along those lines , which shows how perfectly acceptable it is to assume objective temporal passage in spite of relativistic effects , is presented in my more recent fqxi essay .in contrast to the hardcore relativists who would give up temporal passage in favour of an operational definition of simultaneity , einstein was the first relativist to renege on truly relative simultaneity when he assumed an absolute time in constructing his cosmological model ; and despite immediately being chastised by de sitter over this , he never did balk in making the same assumption whenever he considered the cosmological problem as did just about every other cosmologist who followed , with very few notable exceptions ( e.g. , de sitter was one , as there was no absolute time implicit in his model ) .but whenever the assumption of absolute time has been made in cosmology , it s been made together with special relativity s baggage , as the slices of true simultaneity have been assumed to be synchronous in the fundamental rest frame .now we see that , not only is the operational definition _wrong _ in the case of special relativity ( since it comes to require that time does not pass , which is realistically unacceptable ) , but here we ve considered a general relativistic example in which equating ` simultaneity ' and ` synchronicity ' makes even less sense in terms of a reasonable physical interpretation of the mathematical description , since the interpretation is _causally incoherent_. the main argument of this essay was therefore , that while assumption i. of flrw cosmology is justified from the point of view that relative temporal passage should be coherent , assumption ii .is not , and this unjustified special relativistic baggage should be shed by cosmologists ( and really by all relativists , as it leads to further wrong interpretations of the physics ) .assumption iii . hardly requires discussion .it s a mathematical statement of the cosmological principle that no observer holds a special place , but the universe should look the same from every location and is therefore as fundamental an assumption as the principle of relativity .furthermore , our sds universe _ is _ homogeneous , so there is no problem . the final assumption , however , is a concern .the isotopy of our universe is an empirical fact it looks the same to us in every direction , and must have done since its beginning .in contrast , the spatial slices of the cosmological sds solution are _ not _ isotropic : they re a 2-sphere with extrinsic radius of curvature , multiplied by another dimension that scales differently with . 
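for concreteness, this can be seen in the conventions assumed in the sketch earlier (again my reconstruction, since the original symbols were stripped): on a slice of constant timelike r in the cosmological sds solution, the induced line-element is
\[
\mathrm{d}\sigma^{2} \;=\; \left(\frac{2M}{r}+\frac{\Lambda r^{2}}{3}-1\right)\mathrm{d}t^{2} \;+\; r^{2}\left(\mathrm{d}\theta^{2}+\sin^{2}\!\theta\,\mathrm{d}\varphi^{2}\right),
\]
whose coefficients depend only on r: every point of the slice is equivalent (homogeneity), but the t-direction and the 2-sphere directions are scaled by different functions of r (anisotropy), which is exactly the distinction at issue here.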
furthermore , by eq .( [ dtdr ] ) we know that all these fundamental world lines move uniformly through this third spatial dimension as increases .the sds cosmology therefore describes a universe that should be conceived as follows : first of all suppressing one spatial dimension , the universe can be thought of as a 2-sphere that expands from a point , while all fundamental observers forever motionless along the surface ; then , the third spatial dimension should be thought of as a line at each point on the sphere , through which fundamental observers travel uniformly in the course of cosmic time .since it s a general relativistic solution , the distinction between curvature along that third dimension of space and motion through it is not well - defined .however , a possibility presents itself through an analogy with the _ local _ form of the sds solution . as with all physically meaningful solutions of the einstein field equations ,this one begins with a physical concept from which a general line - element is written ; then the line - element gets sent through the field equations and certain restrictions on functions of the field - variables emerge , allowing us to constrain the general form to something more specific that satisfies the requisite second - order differential equations .this is , e.g. , also how flrw cosmology is done i.e . , first the rw line - element satisfying four basic physical / geometrical requirements is written down , and then it s sent through the field equations to determine equations that restrict the form of the scale - factor s evolution , under a further assumption that finite matter densities in the universe should influence the expansion rate .the local sds solution , too , is derived as the vacuum field that forever has spacelike radial symmetry about some central gravitating body , and the field equations are solved to restrict the form of the metric coefficients in the assumed coordinate system .but then , as eddington noted , _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ we reach the same result if we attempt to define symmetry by the propagation of light , so that the cone is taken as the standard of symmetry .it is clear that if the locus has complete symmetry about an axis ( taken as the axis of ) must be expressable by [ the radially isotropic line - element with general functions for the metric coefficients ] . 
__ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ therefore , the local sds metric corresponds to the situation in which light propagates isotropically , and its path in space - time is described by the null lines of a lorentzian metric .prior to algebraic abstraction ( i.e. the assumption of a lorentzian metric and a particular coordinate system ) , the geometrical picture is already set ; and it s upon that basic geometrical set - up that the algebraic properties of the general relativistic field are imposed .this construction of the local sds solution through physical considerations of light - propagation will be used analogously in constructing a geometrical picture upon which the cosmological sds solution can be based which indeed shows that , along with describing a homogeneous universe in which the evolution of cosmic redshifts would be measured _ exactly _ as they have been in our universe ( as proven here already ) , the sds universe should appear isotropic in the fundamental rest frame . but before coming to that , a few more general remarks are necessary , for the examples of the local sds and flrw solutions illustrate a profound thing ; viz . that much of the _ physics_ enters into the description already in defining the basic geometrical picture and the corresponding line - element that broadly sets - up the physical situation of interest , which is then further constrained by requiring that it satisfy the specific properties that are generally imposed by einstein s field law .in fact , when it comes to the cosmological problem , and we begin _ as always _ by assuming what will be our _ actual _ space , and how it will roughly evolve as _ actual _ cosmic time passes i.e . by _ first _ assuming prior _ kinematical _ definitions of absolute space and time , and then constructing an appropriate cosmological line - element , which is finally constrained through the _ dynamical _ field law there is really a lot of room to make it as we d like it .but we ve now got a particular line - element in mind ( viz . the sds cosmological solution ) , and we can use it in guiding the basic kinematical definitions that we make . in particular , we have the description of a universe that _ is _ a two - dimensional sphere that expands as a well - defined function of the proper time of fundamental observers that all remain absolutely at rest , plus another dimension through which those same observers _ are _ moving at a rate that varies through the course of cosmic time . 
according to the equivalence principle , it may be that there is a non - vanishing gravitational field in that particular direction of space , guiding these fundamental observers along , or that this direction is uniform as well , and that the fundamental observers are moving along it , and therefore describe it relatively differently .what s interesting about this latter possibility , is that there would be a fundamental metric describing the evolution of this uniform space through which all the fundamental observers are moving , and that the metric pertaining to them in their proper frames , which they use in describing the evolution of space - time , should nt necessarily _ have _ to be the same fundamental metric transformed to an appropriate coordinate system .it might be defined differently for other physical reasons . in this case ,an affine connection defining those world lines as fundamental geodesics may not be _ compatible _ with the more basic metric , and could be taken as the covariant derivative of a different one .the picture starts to resemble teleparallelism much more closely than it does general relativity ; but since the two theories are equivalent , and we ve recognised that in any case the kinematical definitions are _ more fundamental _ than the covariant dynamical description , since they constitute the basic first principles of the physical theory , we ll press on in this vein . let us suppose a situation where there is actually no gravitational mass at all , but fundamental inertial observers the constituent dynamical matter of our system are _ really _ moving uniformly through a universe that fundamentally _ is _ isotropic and homogeneous , and expands through the course of cosmic time .the fundamental metric for this universe should satisfy even the rw line - element s orthogonality assumption , although the slices of constant cosmic time would _ not _ be synchronous in the rest frame of the fundamental observers . since space , in the two - dimensional slice of the sds universe in which fundamental observers are _ not _ moving , really _ is _ spherical , the obvious choice is an expanding 3-sphere , with line - element where the radius varies , according to the vacuum field equations , as in particular , because a teleparallel theory would require a parallelisable manifold , we note that this is true if and the de sitter metric is recovered ( cf .( [ ds_comoving_univ ] ) ) . while there may be concern because in this case is a minimum of contracting and expanding space, i ll argue below that this may actually be an advantage .this foliation of de sitter space is particularly promising for a couple of reasons : i. the bundle of fundamental geodesics in eq .( [ ds_comoving_univ ] ) are world lines of massless particles , i.e. the ones at for all in eq .( [ sds_statical ] ) with ; and ii . unlike , e.g., the 2-sphere ( or spheres of just about every dimension ) , the 3-sphere _ is _ parallelisable , so it s possible to define an objective direction of motion for the dynamic matter .now we re finally prepared to make use , by analogy , of the derivation of the local sds metric in terms of the propagation of light along null lines . 
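for reference, the expanding 3-sphere invoked a few sentences above is presumably the closed friedmann slicing sourced by the cosmological constant alone (my assumption of the intended formulas); in that case the vacuum field equations give
\[
\mathrm{d}s^{2} \;=\; -\,\mathrm{d}\tau^{2} + R(\tau)^{2}\left[\mathrm{d}\chi^{2}+\sin^{2}\!\chi\left(\mathrm{d}\theta^{2}+\sin^{2}\!\theta\,\mathrm{d}\varphi^{2}\right)\right],
\qquad
R(\tau)\;=\;\sqrt{\tfrac{3}{\Lambda}}\,\cosh\!\left(\sqrt{\tfrac{\Lambda}{3}}\,\tau\right),
\]
which is de sitter space in its closed slicing, with the finite minimum radius \(\sqrt{3/\Lambda}\) at \(\tau=0\) that was just mentioned.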
in contrast , we re now beginning from the description of a universe in which particles that _ are nt _ moving through space are massless , but we want to write down a different lorentzian metric to describe the situation from the perspective of particles that _ are _ all moving through it at a certain rate , who define null lines as the paths of relatively moving masselss particles ; so ,we ll write down a new metric to use from the perspective of particles that all move along null lines pointing in one direction of space , who describe the relatively moving paths of massless particles as null lines instead . this new line - element can be written , where points in the direction of the universe s increasing timelike radius , and describes the dimension of space through which the fundamental particles are moving . solving einstein s field equations proves the jebsen - birkhoff theorem that and are independent of the spacelike variable leads to the cosmological sds solution , eq .( [ sds_statical ] ) , as the abstract description of this physical picture .thus , we ve come full circle to a statement of the line - element that we started with .our analysis began with a proof that in this homogeneous universe , redshifts should evolve with exactly the form that they do in a flat universe ; and in the last few pages we ve aimed at describing a physical situation in which this line - element would apply , and the observed large scale structure would be isotropic . and indeed , in this universe , in which the spatial anisotropy in the line - element is an artifact of the motion of fundamental observers through homogeneous and isotropic expanding space _ and _ their definition of space - time s null lines , this would be so ; for as long as these fundamental observers are uniformly distributed in space that really is isotropic and homogeneous , all snapshots of constant cosmic time ( and therefore the development , looking back in time with increasing radius all the way to the cosmological horizon ) should appear isotropic , since these uniformly distributed observers would always be at rest relative to each other .now , as we ve come to the description of a hypothetical universe that _ should _ appear to expand just as ours does , it is time to take stock of the basic physical assumptions we ve had to make , in relation to the ones we should have to reject . our basic starting point , which the essay argued for , was to assume the description of a three - dimensional universe that expands as absolute time passes , just as in standard flrw cosmology . then , having made this assumption , it was shown that the sds solution could be used to describe a homogeneous universe that would appear isotropic in the fundamental rest frame , where redshifts would necessarily evolve in proper cosmic time with precisely the form we do observe in our universe .in contrast to flrw cosmology , however , the spatial slices of constant time are not synchronous in the fundamental rest frame ; so that assumption of flrw cosmology was rejected , as the essay argued .but then , in order to reconcile observed isotropy in this universe , we had to make an assumption about the basic dynamical nature of matter and photons that all mass might be fundamentally inertial , and that photons take paths of maximal length in de sitter space which , in contrast to relativity theory s postulate about the kinematical description of light , would pertain more immediately to its basic physical nature. 
this additional hypothesis , along with the greater assumption that a vacuum solution of the einstein field equations could be used to model the large scale evolution of our universe , will not be easily accepted , e.g. as professor ellis criticism indicates ; for it entails a rejection of perhaps the most cherished of general relativity s basic assumptions viz . that the world - matter should _ actually _ curve space and influence its evolution , and that the abstract coordinates themselves should have no real geometrical meaning .the latter point was made most significantly by einstein in his autobiography , and was later used as a springboard for the most famous and comprehensive treatment of einstein s theory of gravitation to - date following a brief , but significant statement of the former point , regarding the most basic notion of general relativity ; viz . the concept of space - time - matter reciprocity that ` _ _ space acts on matter , telling it how to move . in turn, matter reacts back on space , telling it how to curve _ _ ' . upon this basis , of einstein s explanation for gravitation, misner , thorne , and wheeler proceed with their brief overview of geometrodynamics , after a quotation from einstein : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ now it came to me: the independence of the gravitational acceleration from the nature of the falling substance , may be expressed as follows : in a gravitational field ( of small spatial extension ) things behave as they do in a space free of gravitation.this happened in 1908 .why were another seven years required for the construction of the general theory of relativity ?the main reason lies in the fact that it is not so easy to free oneself from the idea that coordinates must have an immediate metrical meaning . 
now, _this_ essay opened with a quotation from einstein's tribute to mach, which was thought to capture the aim of the essay contest and show that questioning _which of our basic physical assumptions are wrong?_ is really very much in the spirit of einstein's own way of thinking. only a fragment of einstein's statement was needed to develop a sense of the essay's purpose in that tradition. however, now that we've come to the point of actually challenging deep-seated ideas about the nature of physical reality, it is worth reading a larger excerpt of the statement, which really pertains to our purpose here in every detail:
what is essential, which is based solely on accidents of development? concepts that have proven useful in the order of things easily attain such an authority over us that we forget their earthly origins and accept them as unalterable facts. they are then branded as ` necessities of thought ', ` a priori givens ', etc. the path of scientific advance is often made impassable for a long time through such errors. it is therefore by no means an idle trifling, if we become practiced in analysing the long-familiar concepts, and show upon which circumstances their justification and applicability depend, as they have grown up, individually, from the facts of experience. for through this, their all-too-great authority will be broken. they will be removed, if they cannot be properly legitimated, corrected, if their correlation to given things was far too careless, or replaced by others, if we see a new system that can be established, that we prefer for whatever reasons. this type of analysis appears to the scholars, whose gaze is directed more at the particulars, most superfluous, splayed, and at times even ridiculous. the situation changes, however, when one of the habitually used concepts should be replaced by a sharper one, because the development of the science in question demanded it. then, those who are faced with the fact that their own concepts do not proceed cleanly raise energetic protest and complain of revolutionary threats to their most sacred possessions. in this cry, then, mix the voices of those philosophers who believe those concepts cannot be done without, because they had them in their little treasure chest of the ` absolute ', the ` a priori ', or classified in just such a way that they had proclaimed the principle of immutability.
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ einstein s explanation of ` gravitation ' has indeed proven useful in the order of things ; but we must be careful not to brand this basic _ assumption _ as a ` necessity of thought ' , ` a priori given'for it s just as true that such an overhaul of the basic concepts and assumptions of relativistic cosmology is _ unquestionably demanded _ by the developments in cosmology this past century . indeed ,the five basic assumptions of flrw cosmology i.e ., the four assumptions that need to be made in order to write down the basic line - element , and the assumption that the curvature and evolution of space should causally depend on the density of world - matter are _ not even mutually compatible _ ; and , apart from the cmbr , _none _ of what s been measured this past century not one part of the name ` flat '!is theoretically _ expected _ , and therefore no significant aspect of our universe except the 13.8 billion year old relic radiation , can even remotely be thought of as _explained_. forwhile the curvature and evolution of space is supposed to causally depend on the matter - density , according to the very basis of einstein s theory of gravitation , the assumption that space is isotropic and homogeneous is not justified as long as the universe s age is finite . therefore , while this guiding principle that einstein used in the derivation of his static universe model was somewhat justified in that case , the existence of a causal event horizon in a universe of finite age means assuming the universe _ really is _ homogeneous and isotropic on the largest scale is completely unjustified .the horizon problem , together with the fact that the universe appears to be flat even though it really should nt be , are problems that inflation theory is supposed to solve i.e ., inflation is supposed to explain why a universe that actually _ is nt _ homogeneous and isotropic on the largest scale ( which is the type of universe that s expected , according to general relativity theory ) would _ appear _ isotropic all the way out to the current cosmological event horizon , and also why it should be flat when we d otherwise expect it to be far from that .but inflation theory may be fundamentally flawed , as the particular form of inflation that s consistent with the data should have to be so fine - tuned that it really provides no natural escape from having to fine - tune an isotropic , homogeneous , and flat universe whose curvature and evolution causally depends on its matter - density .in addition to the problems that are supposed to be addressed by inflation theory , and its explanatory shortfalls , the standard cosmological model also describes a universe full of ` cold dark matter ' and ` dark energy ' , with only about 5% of its matter - density comprised of regular stuff .dark energy was originally hoped to be linked to inflation and a solution of the cosmological constant problem , through a quintessence field ( see , e.g. , ) ; but the data indicate that the parameter likely is simply a cosmological constant . if that turns out to be true , it will be as ellis has remarked , ` the triumph of an unwanted guest'for , as e.g. 
penrose has admitted, ` for my own part, in common with most relativity theorists, although normally allowing for the possibility of a non-zero in the equations, i had myself been reluctant to accept that nature would be likely to make use of this term. ' and finally, as discussed in [sec_oce], even the simple fact that the universe is expanding is _entirely unexplained_ in the context of flrw big bang cosmology. in fact, not only is an answer to the question ` why should the universe be expanding? ' (which would amount to an explanation) lacking in this picture, but since the force that's supposed to drive scale-factor evolution in these universes, viz. the matter-density, is _decelerative_ to begin with, such universes actually _shouldn't ever_ begin to expand. and the one possible fundamental aspect of physical reality we know of that could explain why the universe _should_ expand in the course of cosmic time is commonly rejected from a physical point of view as an ` unwanted guest '. in spite of (or because of, depending on how one looks at it) the great progress that's been made in experimental cosmology, the _theory_, apart from the physics of the cmbr, is a complete mess. enormous advances have been made through the observation of the big bang's relic radiation _because we have an understanding of what it is_. in contrast, the rest is an unexpected mess of issues that want explanation and have therefore limited the progress of knowledge. as ijjas et al.
recently noted , _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ in testing the validity of any scientific paradigm , the key criterion is whether measurements agree with what is expected given the paradigm ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ and while their analysis concentrated specifically on the inflation paradigm ( which fails that test ) , this statement really does pertain to cosmology in a much broader context which , through unyielding adherence to flawed concepts , has continued to describe our universe as one that should basically be something other than what we observe , assuming that the many problems with its appearances can be resolved by dressing up our expectations ( i.e. without fundamentally altering them ) so they turn out to look like the observations . this common way of ` doing business 'should really be rejected once and for all , and be replaced with a basic conviction that _ physical reality really should be as it is_which is something we _ can _ describe , given the right axiomatic basis .and so , in light of the fact that so much of our cosmological observations have really been unexpected , and in the same spirit of critical analysis that einstein noted one usually takes in order to determine ` which of our basic physical assumptions are wrong ' , this appendix began with a sharper assertion on the absolute nature of cosmic time than the one that has anyhow always been weakly made in cosmology and a corresponding rejection of one of flrw cosmology s basic assumptions , as the essay argued_because the development of the science in question demands it_. then , through specific analysis of the cosmological sds solution , with the aim of determining in what physical context that could possibly be taken to describe an expanding universe like ours , we found that it might do so just in case the world - matter would nt actually curve the basic fabric of reality , and all mass would be fundamentally inertial in a cosmological context .thus , it was proposed that two basic concepts of general relativity viz . that mass curves space and is in turn guided along in reciprocal action and reaction , and that coordinatesmust have no immediate metrical meaning would also have to be rejected , along with the corresponding principle of relativistic cosmology , that the curvature and evolution of the universe should be determined by its matter content . indeed ,if the universe really does have absolute background structure similar to that which has commonly been assumed in relativistic cosmology , then the specific coordinates that describe that structure _ do _ have immediate metrical meaning that should be consistently and objectively accounted for . and , far from being a _ problem _ of the sds cosmology , the idea that the curvature and evolution of the universe would nt be influenced by locally gravitating matter might be seen as an _ advantage _ of the theory we use to describe our finite - aged universe i.e . 
in light of the horizon, flatness, dark matter, and dark energy problems, as well as the problem of explaining expansion, since these all stem from assuming the alternative. in contrast, in the theory described here, cosmic expansion should essentially _coincide_ with the passage of time, so the universe _should be expanding_ simply because time is passing, and the model predicts it should do so with exactly the form that we observe. when i mentioned to professor ellis in the essay contest discussions that i think the results i've presented here should provide some cause to seriously reconsider the assumption that the expansion rate of the universe should be influenced by its material content (a fundamental assumption of standard cosmology, which professor loeb also challenged in his submission to the contest), his response was, ` well its not only of cosmology its gravitational theory. it describes solar system dynamics, structure formation, black holes and their interactions, and gravitational waves. the assumption is that the gravitational dynamics that holds on small scales also holds on large scales. it's worked so far. ' however, given due recognition of the significant problems of modern cosmology, the argument that einstein's explanation of gravitation should be kept locked tight in our little treasure chest of absolute a priori givens because ` it's worked so far ' seems particularly weak. in fact, if any lesson should be taken from the above statement of einstein's, it must be that ` it's worked so far ' (in other words, ` it's proven useful in the order of things ') is an argument that should _never ever_ be made against the healthy scepticism that all of our basic physical assumptions must warrant. even so, the rejection of such an important concept must be for a very good reason, and it may very well be objected that the specific geometrical structure of the alternative sds model entails a significant assumption on the fundamental geometry of physical reality, which there should be a reason for; i.e., the question arises: if the geometry is not determined by the world-matter, then by what? while a detailed answer to this question hasn't been worked out, it is relevant to note that the local form of the sds solution, which is only parametrically different from the cosmological form upon which our analysis has been based, is the space-time description outside a spherically symmetric, uncharged black hole, which is exactly the type that's expected to result from the eventual collapse of every massive cluster of galaxies in our universe, even if it takes all of cosmic time for that collapse to finally occur. in fact, there seems to be particular promise in this direction, given that the singularity in our description isn't a real physical singularity, but the artifact of a derivative metric that must be ill-defined there, since space must actually always have a finite radius according to the fundamental metric. this is the potential advantage that was noted above, of the finite minimum radius of the foliation of de sitter space defined in eq. ([ds_comoving_univ]).
andas far as that goes , it should be noted that the penrose - hawking singularity theorem ` can not be directly applied when a _ positive cosmological constant _ is present ' , which is indeed our case .for all these reasons , we might realistically expect a description in which gravitational collapse leads to universal birth , and thus an explanation of the big bang and the basic cosmic structure we ve had to assume .along with such new possibilities as an updated description of collapse , and of gravitation in general , that may be explored in a relativistic context when the absolute background structure of cosmology is objectively assumed , the sds cosmology , through its specific requirement that the observed rate of expansion _ should be _ described exactly by the flat scale - factor , has the distinct possibility to _ explain _ why our universe should have expanded as it evidently has and therein lies its greatest advantage .
|
the discovery that the universe is accelerating in its expansion has brought the basic concept of cosmic expansion into question . an analysis of the evolution of this concept suggests that the paradigm that was finally settled into prior to that discovery was not the best option , as the observed acceleration lends empirical support to an alternative which could incidentally explain expansion in general . i suggest , then , that incomplete reasoning regarding the nature of cosmic time in the derivation of the standard model is the reason why the theory can not coincide with this alternative concept . therefore , through an investigation of the theoretical and empirical facts surrounding the nature of cosmic time , i argue that an enduring three - dimensional cosmic present must necessarily be assumed in relativistic cosmology and in a stricter sense than it has been . finally , i point to a related result which could offer a better explanation of the empirically constrained expansion rate .
|
soon after the first repressor protein had been discovered by françois jacob and jacques lucien monod in 1961, theoretical work on gene regulation started. typically, a realistic modelling of gene regulation systems includes rate equations for the concentrations of the participating macromolecules, i.e., mrna and proteins. in vivo experiments made it possible to obtain quantitative data for regulatory processes and their kinetic parameters, and they provided important insight into regulatory dynamics. thus, elowitz and leibler designed and constructed a synthetic network out of three transcriptional repressor systems to build an oscillating network, termed the repressilator, in escherichia coli. in recent years, the field of systems biology has emerged, which aims at a quantitative description of cell behavior and basic dynamic processes, and which permits the analysis of systems such as gene regulation networks. besides detailed quantitative approaches, the general features of regulatory and signaling processes in living cells also gave rise to a minimalist dynamical description in terms of boolean networks, where the state of each gene is either ``on'' or ``off''. such a description is particularly useful when dealing with large networks, because it reduces the huge complexity of the continuous system, with its many differential equations and parameters, to a problem of logical structure which is easier to understand. it permits the study of generic features of entire classes of systems, or the reproduction of the correct sequence of events in gene regulation networks that must function reliably, such as cell cycle dynamics. so far, little is known about the general conditions under which a boolean simplification gives a realistic picture of the dynamics in gene regulation networks. in contrast to boolean networks, ordinary differential equations (odes), which model the switch-like dynamics of genes by using sigmoidal hill functions, can include more detailed information about transcription and translation processes to evaluate the time course of the gene expression patterns. depending on the parameter values, such models can show oscillating behavior or a stable fixed point. even for small systems of only two genes, there seems to be no simple relation between boolean and continuous models. widder et al. and polynikis et al. studied a two-gene activator-inhibitor network and investigated in detail the conditions under which a hopf bifurcation occurs, which leads to oscillating behavior. they found that oscillatory behavior is exhibited by the two-gene model only if the hill coefficients are above a certain threshold, and that the system can be driven through the bifurcation by varying the time scales of the mrna and proteins. the boolean version of this system always shows oscillatory behavior. similar results can be found in del vecchio, who included an additional self-input of one gene and varied the time-scale difference between the activator and the repressor and the time-scale difference between the protein and mrna dynamics by using bifurcation analysis. they also obtained richer dynamical behavior incorporating mrna dynamics and could define a parameter space that leads to stable limit cycles.
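as a concrete illustration of the kind of model being discussed, here is a minimal sketch of a two-gene activator-inhibitor network with explicit mrna and protein dynamics. it is not the model or parameter set of any of the cited studies; the hill thresholds, degradation rates, and production strength below are illustrative choices, with protein 1 activating gene 2, protein 2 repressing gene 1, and kappa the ratio of mrna to protein time scales.

import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, n, kappa, beta=2.0):
    m1, p1, m2, p2 = y
    dm1 = kappa * (beta / (1.0 + p2 ** n) - m1)            # gene 1 repressed by protein 2
    dm2 = kappa * (beta * p1 ** n / (1.0 + p1 ** n) - m2)  # gene 2 activated by protein 1
    dp1 = m1 - p1                                          # translation and protein decay
    dp2 = m2 - p2
    return [dm1, dp1, dm2, dp2]

for n in (2, 6):   # hill coefficient below / above the oscillation threshold for these parameters
    sol = solve_ivp(rhs, (0.0, 400.0), [0.5, 0.5, 1.5, 1.5],
                    args=(n, 5.0), dense_output=True, rtol=1e-9, atol=1e-12)
    p1_late = sol.sol(np.linspace(300.0, 400.0, 2000))[1]
    print(f"n = {n}:  late-time range of protein 1 = {p1_late.max() - p1_late.min():.4f}")

with these choices the n = 2 run settles onto a stable fixed point (the late-time range is essentially zero), while the n = 6 run keeps a finite-amplitude limit cycle; where exactly the hopf bifurcation sits depends on the hill coefficients and on kappa, which is the dependence that the studies cited above map out.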
in this paper , we present a general and comprehensive investigation of the two - gene network . gross and feudel developed a method of _ generalized models _ , which makes it possible to investigate the stability of fixed points and the occurrence of bifurcations as a function of general features of the system , without the need to specify the steady state or the regulatory functions . this approach enables us to unite all previous studies of this system within one framework , and also to include those situations that had not been studied before . this permits us to identify the main differences between the dynamical behavior and the attractor patterns of boolean and continuous models . the paper is structured as follows . in section [ model ] , we introduce the dynamical equations and the generalized method used for the analysis . section [ results ] presents the conditions for the occurrence of oscillations in the continuous model and compares these to the boolean model . finally , we discuss and compare our findings to previous studies in section [ discussion ] . gene expression is the process by which genetic information is ultimately transformed into working proteins . the main steps are transcription from dna to mrna , translation from mrna to linear amino acid sequences , and folding of these into functional proteins . a certain class of proteins , called transcription factors , can bind to the dna to regulate the rate at which their target genes are transcribed into mrna . gene regulation thus involves a network of macromolecules that mutually influence each other . the production of proteins and mrna is balanced by degradation and dilution . the regulatory functions and can either have the same sign ( activator - activator or inhibitor - inhibitor ) or different signs ( activator - inhibitor case ) . in a boolean model , an activating interaction is implemented by the boolean function `` copy '' , and a repressing interaction by `` invert '' . in the boolean version , the model has only four possible states , 00 , 01 , 10 , and 11 . for the activator - activator or inhibitor - inhibitor case , the boolean model has two fixed points and a cycle . the cycle alternates between 01 and 10 for the activator - activator case , and between 00 and 11 for the inhibitor - inhibitor case . for the activator - inhibitor situation , the boolean model has a cycle that involves all four states : . we will see that the continuous model can display a cycle only in the activator - inhibitor situation , and only if the parameters are in the appropriate range . figure [ varied_r_2d ] shows the regions in generalized parameter space where a fixed point is unstable against oscillations . the hopf bifurcations , which occur at the boundary between the white and the gray area , can be determined directly from the characteristic polynomial of the jacobian , which gives rise to only two eigenvalues . oscillations can only occur when the exponent parameters and have different signs , i.e. , for the activator - inhibitor system . furthermore , the product of the hill coefficients and must be large enough . for a larger time - scale ratio between mrna and protein dynamics , the exponent parameters must be larger to obtain a hopf bifurcation . for , where the mrna dynamics is almost always in a steady state , the activator - inhibitor network shows no oscillatory dynamics for exponent parameters , , corresponding to cooperativity coefficients .
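a rough numerical stand - in for the stability analysis sketched above ( again using the illustrative parameter values assumed earlier , not the generalized - model calculation behind the figures ) is to locate the fixed point of the explicit ode system , build the jacobian by finite differences , and test whether a complex eigenvalue pair has a positive real part , which is the fingerprint of an oscillatory ( hopf ) instability .

```python
# minimal sketch, assuming the same illustrative two-gene activator-inhibitor
# odes as above; parameter values and the scan range are assumptions.
import numpy as np
from scipy.optimize import fsolve

def make_rhs(n, d_m, beta=2.0, K=1.0, k_tl=1.0, d_p=1.0):
    def rhs(y):
        # n is kept integer-valued here so that powers of transiently
        # negative fsolve iterates stay real
        m1, p1, m2, p2 = y
        f_rep = beta * K**n / (K**n + p2**n)
        f_act = beta * p1**n / (K**n + p1**n)
        return np.array([f_rep - d_m * m1,
                         k_tl * m1 - d_p * p1,
                         f_act - d_m * m2,
                         k_tl * m2 - d_p * p2])
    return rhs

def oscillatory_instability(n, d_m, eps=1e-6):
    rhs = make_rhs(n, d_m)
    y_star = fsolve(rhs, 0.5 * np.ones(4))   # steady state (assumed reachable from this guess)
    J = np.zeros((4, 4))
    for j in range(4):                        # finite-difference jacobian at the fixed point
        dy = np.zeros(4)
        dy[j] = eps
        J[:, j] = (rhs(y_star + dy) - rhs(y_star - dy)) / (2.0 * eps)
    eigs = np.linalg.eigvals(J)
    return any(e.imag != 0.0 and e.real > 0.0 for e in eigs)

for n in (1.0, 2.0, 4.0, 8.0):                # scan the assumed hill coefficient
    print(f"n = {n}: unstable complex pair -> {oscillatory_instability(n, d_m=0.2)}")
```

the same routine can also be scanned over d_m , which probes the trend stated above that a faster mrna dynamics requires a larger cooperativity before the fixed point loses stability to oscillations .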
even though the generalized method uses normalised variables , the results can easily be compared to numerical simulations . figure [ simu ] shows mrna and protein concentrations for different values of ( rows ) with ( first column ) and ( second column ) . as the separation of time scales between mrna and proteins becomes larger , the oscillation frequency increases , but the amplitude of the oscillations decreases ( ) or even vanishes ( ) . in contrast to the boolean model , where the state of a gene jumps instantaneously from `` off '' to `` on '' and back , the concentrations in the continuous model always change smoothly , even in the limit , where the functions and become step functions . for this reason , the oscillation that occurs in the boolean activator - activator ( or inhibitor - inhibitor ) system does not occur in the continuous model . only the two fixed points that are also present in the boolean model occur in the continuous model . for the same reason , the oscillation of the boolean activator - inhibitor system can occur in the continuous model only when and are steep enough to drive the concentrations sufficiently fast through the intermediate values . otherwise the system settles at the ( only ) fixed point , which is found at intermediate concentration values . figure [ varied_r ] shows the generalization to the case of fig . [ varied_r_2d ] . the hopf bifurcations were calculated using the method of resultants . gray areas show regions in parameter space where the fixed point has a complex pair of unstable eigenvalues , with the other two eigenvalues being stable . figure [ ebene ] shows a cross section at for . one sees again that a faster mrna dynamics leads to a smaller oscillatory region in parameter space . just as for the activator - inhibitor network , and must have different signs to obtain a hopf bifurcation , corresponding to opposed regulatory functions . the additional self - input of gene can be activating or repressing for , but must be activating for a larger ratio of time scales ( e.g. ) to obtain oscillations . larger values of the exponent parameters , and therefore larger hill coefficients , can make up for a large ratio of time scales between mrna and protein dynamics , as in the system without an additional self - input . however , biologically realistic values of are in the range . in order to compare the continuous model with the boolean model , we now specify the functions and to be hillcubes . out of the 16 possible boolean functions of two variables for , only 10 actually depend on the values of both variables . in the following , we discuss these 10 functions and compare the boolean dynamics with the dynamics due to hillcubes . we restrict ourselves to the case that activates , i.e.
, . the case that inhibits can always be mapped onto the first case in a boolean model by inverting the states of nodes and changing the sign of the appropriate connections . the boolean functions `` and '' and `` or '' give rise to the fixed points 00 and 11 in the boolean model . using hillcubes , we obtain for both functions , which means that the continuous model cannot have a hopf bifurcation . whether the continuous model has one or two stable fixed points depends on the parameter values . the boolean functions `` not and '' and `` not or '' give rise to one fixed point ( 00 and 11 , respectively ) and one cycle involving the states 01 and 10 in the boolean model . using hillcubes , we obtain for both functions , which means that the continuous model cannot have a hopf bifurcation . this situation is analogous to the activator - activator loop , where the cycle present in the boolean model does not occur in the continuous model . the boolean functions `` and not '' and `` or not '' give rise to one fixed point ( 00 and 11 , respectively ) , which is a global attractor , in the boolean model . using hillcubes , we obtain and . the signs of the exponent parameters are such that a hopf bifurcation is possible . when the concentration of transcribed mrna is projected onto a two - dimensional surface ( fig . [ hillcubes ] ) , one can see that the two functions `` and not '' and `` or not '' produce significant mrna concentrations in the shaded area of fig . [ ebene ] . optimal conditions for a hopf bifurcation are intermediate values of the mrna concentration , since the function is steep in this region . figure [ hc_f2 ] shows regions in parameter space where is steep , for the function `` and not '' ( and for two other functions to be discussed below ) . in order to determine the parameter regions that place the system in the shaded area of fig . [ ebene ] , we varied the parameters , and at fixed protein concentrations and . the values of and were chosen such that the system is placed in the light - colored areas of fig . [ hc_f2 ] , where oscillations are most likely . the result is shown in fig . [ nkr ] , and numerical simulations are shown in fig . [ simu_andnot ] : there exists a complex pair of unstable eigenvalues for and . for we obtain sustained oscillations . however , for this set of parameter values , the system is close to a bifurcation : at the periodic orbit grows until it collides with the fixed point . ( caption of fig . [ simu_andnot ] : solutions of equations ( [ odes ] ) for and ; for there exists a complex pair of unstable eigenvalues , and we obtain sustained oscillations ; at , the oscillation has become unstable , and the periodic orbit grows until it collides with the fixed point ; parameter values : , . ) the boolean functions `` nor '' and `` nand '' give rise to a cycle , which is a global attractor , in the boolean model .
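the boolean attractors quoted in this section are easy to verify by direct enumeration . the sketch below uses a synchronous update in which node 1 simply copies node 2 ( the activating link assumed here ) while node 2 is governed by a two - input function of both nodes ; which named function ends up with which attractor depends on the argument order and on which node carries the self - input , so the wiring in the code is an assumption for illustration rather than the paper 's exact convention .

```python
# minimal sketch of the boolean side of the comparison; the wiring
# (node 1 <- copy(node 2), node 2 <- f(node 1, node 2)) is an assumption.
from itertools import product

FUNCS = {   # the 10 two-input functions that depend on both inputs
    "and":     lambda a, b: a and b,
    "or":      lambda a, b: a or b,
    "not and": lambda a, b: (not a) and b,
    "not or":  lambda a, b: (not a) or b,
    "and not": lambda a, b: a and (not b),
    "or not":  lambda a, b: a or (not b),
    "nand":    lambda a, b: not (a and b),
    "nor":     lambda a, b: not (a or b),
    "xor":     lambda a, b: a != b,
    "xnor":    lambda a, b: a == b,
}

def attractor(state, f):
    """follow the synchronous update until a state repeats; return the cycle."""
    seen = []
    while state not in seen:
        seen.append(state)
        x1, x2 = state
        state = (x2, int(bool(f(bool(x1), bool(x2)))))
    cyc = seen[seen.index(state):]
    k = cyc.index(min(cyc))   # rotate so that identical cycles compare equal
    return tuple(cyc[k:] + cyc[:k])

for name, f in FUNCS.items():
    attractors = {attractor(s, f) for s in product((0, 1), repeat=2)}
    pretty = [" -> ".join(f"{a}{b}" for a, b in cyc) for cyc in sorted(attractors)]
    print(f"{name:8s}: {pretty}")
```

running the loop lists , for each function , the fixed points and cycles reached from the four initial states , which can then be compared against the classification given in the text .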
using hillcubes , we obtain and . the signs of the exponent parameters are such that a hopf bifurcation is possible . the projections of the functions on a two - dimensional surface and the parameter regions that place the system in the shaded area of fig . [ ebene ] are again shown in figs . [ hc_f2 ] and [ nkr ] . the results are compared to numerical simulations , shown in fig . [ simu_nor ] ( b ) . in contrast to the previous case , the parameter region where oscillations are possible is much smaller . oscillations are only possible when the mrna dynamics is sufficiently slow and when the hill coefficient is above . finally , we consider the boolean functions `` xor '' and `` xnor '' . in contrast to the other functions discussed so far , these functions are not canalyzing but change their output value whenever one of the input values is changed . in the boolean model , these functions give rise to one fixed point and to one cycle that comprises the three remaining states . the signs of and in the continuous model can now be positive or negative , depending on the parameter values . an oscillation is thus possible in principle . based on fig . [ hc_f2 ] , we chose fixed - point concentrations likely to induce oscillations to calculate the surface of the hopf bifurcation . however , in contrast to the two sets of functions discussed in the previous section , we did not find an oscillatory region . by using the method of generalized models to study the dynamics of simple two - gene regulatory network components , we could establish general conditions for the occurrence of hopf bifurcations , which give rise to oscillatory dynamics . apart from the signs of the regulatory interactions , the only relevant parameters in this general description are the hill coefficient and the time - scale ratio between mrna and protein dynamics . by comparing the different types of interactions to two - node boolean models , we found that the occurrence of a cycle in the boolean model is neither necessary nor sufficient for the occurrence of an oscillation in the continuous model . our results combine and generalize the findings of several previous studies of such systems . the studies by widder et al . and polynikis et al . focused on a two - gene system with only two connections , and they found that oscillations can only occur in the activator - inhibitor case , and only for large hill coefficients , or more precisely for . ( choice of other parameters : . ) polynikis et al . found that the system can be driven through the hopf bifurcation by varying the time scales of the mrna and the proteins . widder et al . emphasize that the domain in parameter space that contains a limit cycle becomes larger with increasing cooperativity as expressed by higher hill coefficients . all these findings are contained concisely in our fig . [ varied_r_2d ] . the two - gene system with three connections was studied by del vecchio , for the case , , , by using generalized hill functions . the extensive bifurcation analysis shows that oscillations occur over a larger parameter range when the mrna dynamics is slower . this result is a special case of our very general result shown in fig . [ varied_r ] . norrell et al .
studied simple rings of four genes , and rings where one node has an additional self - input . they included a time delay to model translation , instead of including additional equations for the mrna concentrations . they found that the time delay must be sufficiently large for oscillations to occur , which agrees with our finding that must be sufficiently small . furthermore , they found that continuous systems can exhibit stable oscillations in cases where boolean reasoning would suggest otherwise , and that on a ring periodic cycles of boolean systems do not exist in continuous systems when the oscillation is not stable against fluctuations in the update time . this finding is a generalization of the result described above that the two - gene activator - activator system has no oscillations . in a different publication , they point out the importance of the detailed form of the continuous functions for obtaining nontrivial dynamical patterns . this finding is related to our finding that the hill coefficients must be sufficiently large for oscillations to occur . mochizuki investigated random networks with larger numbers of nodes . he found that the number of different steady states increases with the number of self - regulatory genes . he furthermore found that many of the periodic oscillations observed in the boolean network are not present in the continuous model of gene regulatory networks , and that therefore the predictions of boolean models can become unrealistic or too complex for larger networks when compared to those of the corresponding ode models . the latter finding is consistent with the findings for the simple two - gene model . when all these investigations are taken together , there appears to be no simple criterion for deciding whether a periodic dynamical behavior in a boolean model has an equivalent in a continuous model . in the examples investigated in this paper , there are two cases where the periodic dynamics of the boolean model is `` reliable '' in the sense that fluctuations in the update times of the two nodes do not destroy the dynamical cycle . the first case is the activator - inhibitor system , the second is the `` nor '' and `` nand '' functions . the corresponding continuous model has periodic oscillations whenever the hill coefficients are large enough and the time scale of the mrna is slow enough . however , a reliable oscillation in the boolean model is not a necessary requirement for an oscillation in the continuous model , as shown for the `` xor '' and `` xnor '' functions ( where the oscillation in the boolean model is not reliable ) , and for the `` and not '' and `` or not '' functions , where the boolean model has a global fixed point .
f. jacob and j. monod , j. mol . biol . * 3 * , 318 ( 1961 ) .
j. monod , j .- p. changeux and f. jacob , j. mol . biol . * 6 * , 306 ( 1963 ) .
m. b. elowitz and s. leibler , nature * 403 * , 335 ( 2000 ) .
s. a. kauffman , j. theor . biol . * 22 * , 437 ( 1969 ) .
r. thomas , j. theor . biol . * 42 * , 563 ( 1973 ) .
s. bornholdt , science * 310 * , 449 ( 2005 ) .
f. li , t. long , y. lu , q. ouyang and c. tang , proc . natl . acad . sci . usa * 101 * , 4781 ( 2004 ) .
s. widder , j. schicho and p. schuster , j. theor . biol . * 246 * , 395 ( 2007 ) .
a. polynikis , s. j. hogan and m. di bernardo , j. theor . biol . * 261 * , 511 ( 2009 ) .
d. del vecchio , proceedings of the american control conference , new york , july 2007 .
t. gross and u. feudel , phys . rev . e * 73 * , 016205 ( 2006 ) .
d. del vecchio and e. d. sontag , proceedings of the american control conference , new york , july 2007 .
t. chen , h. l. he and g. m. church , pacific symposium on biocomputing * 4 * , 29 ( 1999 ) .
g. yagil and e. yagil , biophys . j. * 11 * , 11 ( 1971 ) .
a. v. hill , j. physiol . * 40 * , iv ( 1910 ) .
u. alon , _ an introduction to systems biology _ ( chapman & hall / crc , 2007 ) .
y. setty , a. e. mayo , m. g. surette and u. alon , proc . natl . acad . sci . usa * 100 * , 7702 ( 2003 ) .
d. wittmann , j. krumsiek , j. saez - rodriguez , d. a. lauffenburger , s. klamt and f. theis , bmc syst . biol . * 3 * , 98 ( 2009 ) .
t. gross and u. feudel , physica d * 195 * , 292 ( 2004 ) .
t. gross , _ population dynamics : general results from local analysis _ ( der andere verlag , tönning , 2004 ) .
t. gross and u. feudel , ocean dyn . * 59 * , 417 ( 2009 ) .
r. steuer , t. gross , j. selbig and b. blasius , proc . natl . acad . sci . usa * 103 * , 11868 ( 2006 ) .
r. steuer , a. nunes nesi , a. r. fernie , t. gross , b. blasius and j. selbig , bioinformatics * 23 * , 1378 ( 2007 ) .
j. norrell , b. samuelsson and j. e. s. socolar , phys . rev . e * 76 * , 046122 ( 2007 ) .
j. norrell and j. e. s. socolar , phys . rev . e * 79 * , 061908 ( 2009 ) .
a. mochizuki , j. theor . biol . * 236 * , 291 ( 2005 ) .
|
we investigate the dynamical behavior of simple modules composed of two genes with two or three regulating connections . continuous dynamics for mrna and protein concentrations is compared to a boolean model for gene activity . using a generalized method , we study within a single framework different continuous models and different types of regulatory functions , and establish conditions under which the system can display stable oscillations . these conditions concern the time scales , the degree of cooperativity of the regulating interactions , and the signs of the interactions . not all models that show oscillations under boolean dynamics can have oscillations under continuous dynamics , and vice versa .
|